Aug 13 19:43:52 crc systemd[1]: Starting Kubernetes Kubelet...
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 19:43:54 crc kubenswrapper[4183]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.177165 4183 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182423 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182470 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182483 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182492 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182501 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182509 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182517 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182526 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182534 4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182542 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182551 4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182559 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182567 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182576 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182584 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182592 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182600 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182608 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182617 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182624 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182633 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182641 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182650 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182658 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182666 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182733 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182748 4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182757 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182765 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182858 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182874 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182883 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182891 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182900 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182908 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182918 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182926 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182934 4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182943 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182951 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182959 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182967 4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182975 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182984 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.182992 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183018 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183026 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183034 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183042 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183051 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183060 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183069 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183078 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183088 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183097 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183107 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183116 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183125 4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183134 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.183145 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183412 4183 flags.go:64] FLAG: --address="0.0.0.0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183522 4183 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183535 4183 flags.go:64] FLAG: --anonymous-auth="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183543 4183 flags.go:64] FLAG: --application-metrics-count-limit="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183609 4183 flags.go:64] FLAG: --authentication-token-webhook="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183620 4183 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183630 4183 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183638 4183 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183645 4183 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183652 4183 flags.go:64] FLAG: --azure-container-registry-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183659 4183 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183667 4183 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183679 4183 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183688 4183 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183695 4183 flags.go:64] FLAG: --cgroup-root=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183701 4183 flags.go:64] FLAG: --cgroups-per-qos="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183708 4183 flags.go:64] FLAG: --client-ca-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183715 4183 flags.go:64] FLAG: --cloud-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183721 4183 flags.go:64] FLAG: --cloud-provider=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183727 4183 flags.go:64] FLAG: --cluster-dns="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183740 4183 flags.go:64] FLAG: --cluster-domain=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183750 4183 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183757 4183 flags.go:64] FLAG: --config-dir=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183764 4183 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183771 4183 flags.go:64] FLAG: --container-log-max-files="5"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183835 4183 flags.go:64] FLAG: --container-log-max-size="10Mi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183849 4183 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183858 4183 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183865 4183 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183872 4183 flags.go:64] FLAG: --contention-profiling="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183879 4183 flags.go:64] FLAG: --cpu-cfs-quota="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183886 4183 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183893 4183 flags.go:64] FLAG: --cpu-manager-policy="none"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183904 4183 flags.go:64] FLAG: --cpu-manager-policy-options=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183916 4183 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183923 4183 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183929 4183 flags.go:64] FLAG: --enable-debugging-handlers="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183939 4183 flags.go:64] FLAG: --enable-load-reader="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183946 4183 flags.go:64] FLAG: --enable-server="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183953 4183 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183970 4183 flags.go:64] FLAG: --event-burst="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183978 4183 flags.go:64] FLAG: --event-qps="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183984 4183 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183992 4183 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.183998 4183 flags.go:64] FLAG: --eviction-hard=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184007 4183 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184013 4183 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184024 4183 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184035 4183 flags.go:64] FLAG: --eviction-soft=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184043 4183 flags.go:64] FLAG: --eviction-soft-grace-period=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184051 4183 flags.go:64] FLAG: --exit-on-lock-contention="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184058 4183 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184067 4183 flags.go:64] FLAG: --experimental-mounter-path=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184075 4183 flags.go:64] FLAG: --fail-swap-on="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184083 4183 flags.go:64] FLAG: --feature-gates=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184100 4183 flags.go:64] FLAG: --file-check-frequency="20s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184107 4183 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184114 4183 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184121 4183 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184128 4183 flags.go:64] FLAG: --healthz-port="10248"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184136 4183 flags.go:64] FLAG: --help="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184143 4183 flags.go:64] FLAG: --hostname-override=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184157 4183 flags.go:64] FLAG: --housekeeping-interval="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184164 4183 flags.go:64] FLAG: --http-check-frequency="20s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184171 4183 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184177 4183 flags.go:64] FLAG: --image-credential-provider-config=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184183 4183 flags.go:64] FLAG: --image-gc-high-threshold="85"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184190 4183 flags.go:64] FLAG: --image-gc-low-threshold="80"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184270 4183 flags.go:64] FLAG: --image-service-endpoint=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184285 4183 flags.go:64] FLAG: --iptables-drop-bit="15"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184301 4183 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184308 4183 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184315 4183 flags.go:64] FLAG: --kernel-memcg-notification="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184323 4183 flags.go:64] FLAG: --kube-api-burst="100"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184330 4183 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184336 4183 flags.go:64] FLAG: --kube-api-qps="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184342 4183 flags.go:64] FLAG: --kube-reserved=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184355 4183 flags.go:64] FLAG: --kube-reserved-cgroup=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184366 4183 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184373 4183 flags.go:64] FLAG: --kubelet-cgroups=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184380 4183 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184387 4183 flags.go:64] FLAG: --lock-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184394 4183 flags.go:64] FLAG: --log-cadvisor-usage="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184401 4183 flags.go:64] FLAG: --log-flush-frequency="5s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184408 4183 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184432 4183 flags.go:64] FLAG: --log-json-split-stream="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184440 4183 flags.go:64] FLAG: --logging-format="text"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184446 4183 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184455 4183 flags.go:64] FLAG: --make-iptables-util-chains="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184462 4183 flags.go:64] FLAG: --manifest-url=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184468 4183 flags.go:64] FLAG: --manifest-url-header=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184486 4183 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184493 4183 flags.go:64] FLAG: --max-open-files="1000000"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184502 4183 flags.go:64] FLAG: --max-pods="110"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184508 4183 flags.go:64] FLAG: --maximum-dead-containers="-1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184516 4183 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184523 4183 flags.go:64] FLAG: --memory-manager-policy="None"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184529 4183 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184541 4183 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184550 4183 flags.go:64] FLAG: --node-ip="192.168.126.11"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184557 4183 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184575 4183 flags.go:64] FLAG: --node-status-max-images="50"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184581 4183 flags.go:64] FLAG: --node-status-update-frequency="10s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184588 4183 flags.go:64] FLAG: --oom-score-adj="-999"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184595 4183 flags.go:64] FLAG: --pod-cidr=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184611 4183 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184623 4183 flags.go:64] FLAG: --pod-manifest-path=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184630 4183 flags.go:64] FLAG: --pod-max-pids="-1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184637 4183 flags.go:64] FLAG: --pods-per-core="0"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184644 4183 flags.go:64] FLAG: --port="10250"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184650 4183 flags.go:64] FLAG: --protect-kernel-defaults="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184657 4183 flags.go:64] FLAG: --provider-id=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184670 4183 flags.go:64] FLAG: --qos-reserved=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184681 4183 flags.go:64] FLAG: --read-only-port="10255"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184687 4183 flags.go:64] FLAG: --register-node="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184694 4183 flags.go:64] FLAG: --register-schedulable="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184701 4183 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184712 4183 flags.go:64] FLAG: --registry-burst="10"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184722 4183 flags.go:64] FLAG: --registry-qps="5"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184737 4183 flags.go:64] FLAG: --reserved-cpus=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184744 4183 flags.go:64] FLAG: --reserved-memory=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184752 4183 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184759 4183 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184765 4183 flags.go:64] FLAG: --rotate-certificates="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184878 4183 flags.go:64] FLAG: --rotate-server-certificates="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184890 4183 flags.go:64] FLAG: --runonce="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184903 4183 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184912 4183 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184919 4183 flags.go:64] FLAG: --seccomp-default="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184926 4183 flags.go:64] FLAG: --serialize-image-pulls="true"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184933 4183 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184940 4183 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184947 4183 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184953 4183 flags.go:64] FLAG: --storage-driver-password="root"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184973 4183 flags.go:64] FLAG: --storage-driver-secure="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184982 4183 flags.go:64] FLAG: --storage-driver-table="stats"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184989 4183 flags.go:64] FLAG: --storage-driver-user="root"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.184996 4183 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185003 4183 flags.go:64] FLAG: --sync-frequency="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185010 4183 flags.go:64] FLAG: --system-cgroups=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185017 4183 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185038 4183 flags.go:64] FLAG: --system-reserved-cgroup=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185045 4183 flags.go:64] FLAG: --tls-cert-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185052 4183 flags.go:64] FLAG: --tls-cipher-suites="[]"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185059 4183 flags.go:64] FLAG: --tls-min-version=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185068 4183 flags.go:64] FLAG: --tls-private-key-file=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185074 4183 flags.go:64] FLAG: --topology-manager-policy="none"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185081 4183 flags.go:64] FLAG: --topology-manager-policy-options=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185087 4183 flags.go:64] FLAG: --topology-manager-scope="container"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185102 4183 flags.go:64] FLAG: --v="2"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185116 4183 flags.go:64] FLAG: --version="false"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185124 4183 flags.go:64] FLAG: --vmodule=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185131 4183 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185139 4183 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185244 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185258 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185265 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185272 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185280 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185295 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185307 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185314 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185322 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185329 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185337 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185344 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185354 4183 feature_gate.go:227] unrecognized feature gate: NewOLM
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185369 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185375 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185381 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185387 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185394 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185400 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185406 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185411 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185423 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185432 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185438 4183 feature_gate.go:227] unrecognized feature gate: MetricsServer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185444 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185450 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185455 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185463 4183 feature_gate.go:227] unrecognized feature gate: Example
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185470 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185476 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185482 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185494 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185500 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185506 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185513 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185520 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185527 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185537 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185545 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185552 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185559 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185566 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185573 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185581 4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185592 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185600 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185607 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185615 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185622 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185630 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185636 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185642 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185647 4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185655 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185661 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185667 4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185673 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185678 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185684 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.185690 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.185698 4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.214743 4183 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.214852 4183 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 19:43:54 crc 
kubenswrapper[4183]: W0813 19:43:54.214895 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214906 4183 feature_gate.go:227] unrecognized feature gate: NewOLM Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214914 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214922 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214932 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214940 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214947 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214955 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214962 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214970 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214978 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.214986 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215020 4183 feature_gate.go:227] unrecognized feature gate: MetricsServer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215030 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215038 4183 feature_gate.go:227] unrecognized feature gate: 
ClusterAPIInstallNutanix Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215047 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215054 4183 feature_gate.go:227] unrecognized feature gate: Example Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215064 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215070 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215077 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215084 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215091 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215098 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215106 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215113 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215120 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215127 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215136 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215145 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215154 4183 
feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215162 4183 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215171 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215180 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215188 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215232 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215247 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215255 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215263 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215272 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215279 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215288 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215296 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215305 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215313 4183 feature_gate.go:227] unrecognized 
feature gate: InstallAlternateInfrastructureAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215321 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215333 4183 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215341 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215348 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215357 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215365 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215373 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215382 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215390 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215399 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215407 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215416 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215424 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215432 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification 
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215440 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215449 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.215458 4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215645 4183 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215660 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215669 4183 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215678 4183 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215686 4183 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215695 4183 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215703 4183 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215712 4183 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215719 4183 feature_gate.go:227] 
unrecognized feature gate: NetworkLiveMigration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215727 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215736 4183 feature_gate.go:227] unrecognized feature gate: GatewayAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215744 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215754 4183 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215763 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215832 4183 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215847 4183 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215855 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215864 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215873 4183 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215881 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215889 4183 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215897 4183 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215904 4183 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Aug 13 
19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215913 4183 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215921 4183 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215929 4183 feature_gate.go:227] unrecognized feature gate: ImagePolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215937 4183 feature_gate.go:227] unrecognized feature gate: NewOLM Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215946 4183 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215954 4183 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215962 4183 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215971 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215979 4183 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215987 4183 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.215996 4183 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216004 4183 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216012 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216021 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216029 4183 feature_gate.go:227] unrecognized feature gate: 
MetricsServer Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216038 4183 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216048 4183 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216056 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216064 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216073 4183 feature_gate.go:227] unrecognized feature gate: Example Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216081 4183 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216089 4183 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216098 4183 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216106 4183 feature_gate.go:227] unrecognized feature gate: PinnedImages Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216114 4183 feature_gate.go:227] unrecognized feature gate: PlatformOperators Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216122 4183 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216130 4183 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216141 4183 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216149 4183 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216160 4183 feature_gate.go:240] 
Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216169 4183 feature_gate.go:227] unrecognized feature gate: SignatureStores Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216177 4183 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216185 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216227 4183 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216244 4183 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216252 4183 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.216261 4183 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.216270 4183 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.218639 4183 server.go:925] "Client rotation is on, will bootstrap in background" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.261135 4183 bootstrap.go:266] part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-06-27 13:02:31 +0000 UTC Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.264516 4183 
bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.268356 4183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.269062 4183 server.go:982] "Starting client certificate rotation" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.269322 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.270038 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.305247 4183 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.348409 4183 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.354284 4183 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.355040 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.383335 4183 remote_runtime.go:143] "Validated CRI v1 runtime API" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.383439 4183 util_unix.go:103] "Using this endpoint is 
deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.423604 4183 remote_image.go:111] "Validated CRI v1 image API" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.436425 4183 fs.go:132] Filesystem UUIDs: map[68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2] Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.436494 4183 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm major:0 minor:43 fsType:tmpfs blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/40b1512db3f1e3b7db43a52c25ec16b90b1a271577cfa32a91a92a335a6d73c5/merged major:0 minor:44 fsType:overlay blockSize:0}] Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.453677 4183 manager.go:217] Machine: {Timestamp:2025-08-13 19:43:54.449606963 +0000 UTC m=+1.142271741 CPUVendorID:AuthenticAMD NumCores:6 NumPhysicalCores:1 NumSockets:6 CpuFrequency:2800000 MemoryCapacity:14635360256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 
SystemUUID:b5eaf2e9-3c86-474e-aca5-bab262204689 BootID:7bac8de7-aad0-4ed8-a9ad-c4391f6449b7 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:1463533568 Type:vfs Inodes:357308 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/b56e232756d61ee2b06c4c940f94dd2d9c1c6744eb2ba718b704bda5002ffdcc/userdata/shm DeviceMajor:0 DeviceMinor:43 Capacity:65536000 Type:vfs Inodes:1786543 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:85294297088 Type:vfs Inodes:41680368 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:7317680128 Type:vfs Inodes:1786543 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:2927075328 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680368 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:7317680128 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:85899345920 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:52:fd:fc:07:21:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:enp2s0 MacAddress:52:fd:fc:07:21:82 Speed:-1 Mtu:1500} {Name:eth10 MacAddress:c2:6f:cd:56:e0:cc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:a9:95:66:6b:74 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:14635360256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.455115 4183 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.455278 4183 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.464008 4183 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465562 4183 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465947 4183 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.465986 4183 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.466525 4183 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.468951 4183 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.470533 4183 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.471372 4183 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.474413 4183 kubelet.go:406] "Attempting to sync node with API server"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.474458 4183 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.475131 4183 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.475372 4183 kubelet.go:322] "Adding apiserver pod source"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.476751 4183 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.481718 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.482235 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.482139 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.482302 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.485825 4183 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.492543 4183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.493577 4183 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495264 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495561 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495608 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495724 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495888 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.495980 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496094 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496285 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496379 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496398 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496535 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496614 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496656 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496880 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.496980 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.497815 4183 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.500830 4183 server.go:1262] "Started kubelet"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.502655 4183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.502841 4183 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.500836 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc systemd[1]: Started Kubernetes Kubelet.
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.506975 4183 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.517440 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.518906 4183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.525606 4183 server.go:461] "Adding debug handlers to kubelet server"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.660549 4183 volume_manager.go:289] "The desired_state_of_world populator starts"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.660966 4183 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.670638 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms"
Aug 13 19:43:54 crc kubenswrapper[4183]: W0813 19:43:54.675547 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.675645 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.676413 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.676439 4183 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718166 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718472 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718503 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718520 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718535 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718551 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718566 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718582 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718598 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718624 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718642 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718670 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718691 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718713 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718729 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718756 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718823 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718855 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718875 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.718988 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719013 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719030 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719048 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719074 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719094 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719113 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719138 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719156 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719243 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719274 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719293 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719332 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719360 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719377 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719410 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719437 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719456 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719472 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719488 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719513 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719531 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719545 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719561 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719607 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719624 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719640 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719670 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719690 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719724 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719743 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719758 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.719987 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720022 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720039 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720066 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720083 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720101 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720124 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720150 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720166 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720221 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720241 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720266 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720284 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720304 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720325 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720340 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720357 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720371 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720384 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720396 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720411 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720438 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720451 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720465 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.720483 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.726965 4183 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727094 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727112 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727125 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727143 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727157 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727170 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727282 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727302 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727318 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727331 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727353 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727366 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727379 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727509 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727526 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727582 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727599 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod=""
podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727618 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727635 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727648 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727667 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727680 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727693 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" 
volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727706 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="378552fd-5e53-4882-87ff-95f3d9198861" volumeName="kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727723 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727741 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727754 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727767 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727839 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" 
volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727855 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727878 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727890 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727902 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727924 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727936 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" 
volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727948 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727960 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727977 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.727993 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728005 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728016 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" 
seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728033 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="af6b67a3-a2bd-4051-9adc-c208a5a65d79" volumeName="kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728049 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728062 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728074 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728086 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728502 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 
19:43:54.728516 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728528 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728546 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728562 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728575 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728596 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728609 4183 reconstruct_new.go:135] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728620 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728631 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728643 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728654 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728665 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728681 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728697 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728708 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728729 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728742 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728754 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728766 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728871 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728892 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728904 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" volumeName="kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728921 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728935 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728950 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" 
volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728962 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728973 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728985 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.728997 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729010 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729022 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" 
volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729045 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729058 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729071 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729084 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729565 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729583 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" 
volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729595 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729607 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729619 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729633 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729644 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729656 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" 
seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729669 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729686 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729701 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729714 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729732 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates" seLinuxMountContext="" Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729748 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext="" Aug 13 19:43:54 crc 
kubenswrapper[4183]: I0813 19:43:54.729761 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729817 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729836 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729852 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729870 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729883 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729895 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729909 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729922 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="87df87f4-ba66-4137-8e41-1fa632ad4207" volumeName="kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729934 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729946 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.729959 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" volumeName="kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730684 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730704 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730716 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730733 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730748 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730760 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13ad7555-5f28-4555-a563-892713a8433a" volumeName="kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.730994 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731015 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731032 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731056 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731075 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731088 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" volumeName="kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731103 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731115 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731133 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731150 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731163 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731241 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731260 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731276 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731296 4183 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext=""
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731398 4183 reconstruct_new.go:102] "Volume reconstruction finished"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.731411 4183 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.760614 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.765043 4183 container_manager_linux.go:884] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775241 4183 factory.go:55] Registering systemd factory
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775368 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775770 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.775873 4183 factory.go:221] Registration of the systemd container factory successfully
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.776145 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.779389 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.779986 4183 factory.go:153] Registering CRI-O factory
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780147 4183 factory.go:221] Registration of the crio container factory successfully
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780616 4183 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.780912 4183 factory.go:103] Registering Raw factory
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.781217 4183 manager.go:1196] Started watching for new ooms in manager
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.782546 4183 manager.go:319] Starting recovery of all containers
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.836554 4183 manager.go:324] Recovery completed
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.856954 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.858742 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:54 crc kubenswrapper[4183]: E0813 19:43:54.878047 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms"
Aug 13 19:43:54 crc kubenswrapper[4183]: I0813 19:43:54.980529 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024187 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024243 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.024710 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.026755 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029064 4183 cpu_manager.go:215] "Starting CPU manager" policy="none"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029249 4183 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.029599 4183 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.046027 4183 policy_none.go:49] "None policy: Start"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.048422 4183 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.048995 4183 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.152712 4183 manager.go:296] "Starting Device Plugin manager"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.153754 4183 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.154469 4183 server.go:79] "Starting device plugin registration server"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.159564 4183 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.160021 4183 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.160109 4183 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.203607 4183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207046 4183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207448 4183 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.207823 4183 kubelet.go:2343] "Starting kubelet main sync loop"
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.208236 4183 kubelet.go:2367] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.221281 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.221355 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.280947 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.309413 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.310904 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.312723 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.317428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.319511 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.319642 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.323652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.324535 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329259 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329319 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329356 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.329377 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330172 4183 topology_manager.go:215] "Topology Admit Handler" podUID="53c1db1508241fbac1bedf9130341ffe" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.330667 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332452 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332629 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.332661 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.333185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.333258 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334431 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.334444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335632 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335705 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335771 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.335860 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336747 4183 topology_manager.go:215] "Topology Admit Handler" podUID="631cdb37fbb54e809ecc5e719aebd371" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.336897 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.337520 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340045 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340131 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.340446 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.402456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.404278 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405101 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405176 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.405191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.427930 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429816 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.429912 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.431407 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458478 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458898 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.458984 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459010 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459030 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459062 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459083 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459104 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459122 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459251 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459318 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459384 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459415 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459465 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.459494 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.506240 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.537648 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.537744 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561519 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561715 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561850 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561916 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.561980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID:
\"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562414 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562520 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562569 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562536 
4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562757 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562826 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562873 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562900 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562923 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562945 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562969 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562977 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.562990 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.563241 4183 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.664890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.688244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.699689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.729881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: I0813 19:43:55.738024 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.755628 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.755711 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.771301 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c1db1508241fbac1bedf9130341ffe.slice/crio-e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 WatchSource:0}: Error finding container e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83: Status 404 returned error can't find the container with id e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.775105 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b WatchSource:0}: Error finding container 410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b: Status 404 returned error can't find the container with id 410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.776442 4183 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc WatchSource:0}: Error finding container b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc: Status 404 returned error can't find the container with id b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.799304 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: E0813 19:43:55.799427 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:55 crc kubenswrapper[4183]: W0813 19:43:55.800647 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb2b200bca0d10cf0fe16fb7c0caf80.slice/crio-f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29 WatchSource:0}: Error finding container f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29: Status 404 returned error can't find the container with id f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29 Aug 13 19:43:56 crc kubenswrapper[4183]: W0813 19:43:56.069422 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.069914 4183 
reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.082587 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.227474 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.229358 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"b55571250f9ecd41f6aecef022adaa7dfc487a62d8b3c48363ff694df16723fc"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.230869 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"410a136ab4d60a86c7b8b3d5f28a28bd1118455ff54525a3bc99a50a4ad5a66b"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.232052 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234221 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234239 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.234266 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.235577 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"f37d107ed757bb5270315ab709945eb5fc67489de969c3be9362d277114d8d29"} Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.235746 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.237420 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9"} Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.451076 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:43:56 crc kubenswrapper[4183]: E0813 19:43:56.455457 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:56 crc kubenswrapper[4183]: I0813 19:43:56.508515 4183 csi_plugin.go:880] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: W0813 19:43:57.317931 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.318144 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.509595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: W0813 19:43:57.628935 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.629006 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.685165 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.836113 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839094 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839177 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839196 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:57 crc kubenswrapper[4183]: I0813 19:43:57.839229 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:43:57 crc kubenswrapper[4183]: E0813 19:43:57.840852 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249354 4183 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249430 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.249608 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251225 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.251241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.266930 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.266977 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.269747 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.269973 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerDied","Data":"d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.270197 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 
19:43:58.271762 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.271931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.272147 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276167 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.276473 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.287260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.291941 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293208 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293247 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.293259 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294336 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6" exitCode=0 Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294394 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6"} Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.294503 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313410 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.313425 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:43:58 crc kubenswrapper[4183]: I0813 19:43:58.505669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: W0813 19:43:58.854605 4183 
reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: E0813 19:43:58.855205 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: W0813 19:43:58.867610 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:58 crc kubenswrapper[4183]: E0813 19:43:58.867659 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:43:59 crc kubenswrapper[4183]: I0813 19:43:59.324418 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} Aug 13 19:43:59 crc kubenswrapper[4183]: I0813 19:43:59.507149 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 
19:44:00.410433 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.466757 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.467072 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471277 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.471297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.486883 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.487041 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.492989 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.505078 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"}
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.505299 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.577033 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.590270 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:00 crc kubenswrapper[4183]: I0813 19:44:00.720716 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.723203 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:00 crc kubenswrapper[4183]: E0813 19:44:00.887637 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.041735 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044357 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.044544 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:01 crc kubenswrapper[4183]: E0813 19:44:01.046129 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.130.11:6443: connect: connection refused" node="crc"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.510569 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.520531 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"}
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545127 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0" exitCode=0
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545242 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0"}
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.545204 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547827 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.547851 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.558076 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.564287 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.564398 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.565986 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566213 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566227 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:01 crc kubenswrapper[4183]: I0813 19:44:01.566256 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:01 crc kubenswrapper[4183]: W0813 19:44:01.898722 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:01 crc kubenswrapper[4183]: E0813 19:44:01.898960 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.510177 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.588563 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"}
Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.588662 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601242 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:02 crc kubenswrapper[4183]: I0813 19:44:02.601355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:02 crc kubenswrapper[4183]: W0813 19:44:02.882299 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:02 crc kubenswrapper[4183]: E0813 19:44:02.882601 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:03 crc kubenswrapper[4183]: W0813 19:44:03.445602 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:03 crc kubenswrapper[4183]: E0813 19:44:03.445714 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.617916 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"}
Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.636725 4183 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73" exitCode=0
Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.637116 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73"}
Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.637226 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641454 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:03 crc kubenswrapper[4183]: I0813 19:44:03.641475 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.643619 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"631cdb37fbb54e809ecc5e719aebd371","Type":"ContainerStarted","Data":"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"}
Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.643721 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645099 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645124 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:04 crc kubenswrapper[4183]: I0813 19:44:04.645135 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:05 crc kubenswrapper[4183]: E0813 19:44:05.404914 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.651064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd"}
Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660344 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a"}
Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660370 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.660455 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661600 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:05 crc kubenswrapper[4183]: I0813 19:44:05.661856 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.699489 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"}
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.700288 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.701949 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.702080 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.702100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.709009 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.709489 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c"}
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710124 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:06 crc kubenswrapper[4183]: I0813 19:44:06.710226 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.447444 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449427 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449443 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.449484 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.563401 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.705518 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.705957 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709252 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709310 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.709334 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.726474 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15"}
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.726614 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.728519 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.729063 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.729094 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:07 crc kubenswrapper[4183]: I0813 19:44:07.746001 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743550 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743552 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44"}
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.743630 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.744334 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746270 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.746349 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747251 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747304 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747831 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.747853 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.750507 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:08 crc kubenswrapper[4183]: I0813 19:44:08.905078 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.008274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.358473 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.581161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746214 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.746313 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748316 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748336 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748365 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748395 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748407 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:09 crc kubenswrapper[4183]: I0813 19:44:09.748464 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.748543 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.748652 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.749968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750022 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750040 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:10 crc kubenswrapper[4183]: I0813 19:44:10.750296 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.169892 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.170071 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171882 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:11 crc kubenswrapper[4183]: I0813 19:44:11.171944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:12 crc kubenswrapper[4183]: I0813 19:44:12.581168 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:44:12 crc kubenswrapper[4183]: I0813 19:44:12.582219 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:44:13 crc kubenswrapper[4183]: W0813 19:44:13.494495 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.495229 4183 trace.go:236] Trace[777984701]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:44:03.491) (total time: 10003ms):
Aug 13 19:44:13 crc kubenswrapper[4183]: Trace[777984701]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:44:13.494)
Aug 13 19:44:13 crc kubenswrapper[4183]: Trace[777984701]: [10.003254671s] [10.003254671s] END
Aug 13 19:44:13 crc kubenswrapper[4183]: E0813 19:44:13.495274 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.510042 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": net/http: TLS handshake timeout
Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.524599 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.524771 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:13 crc kubenswrapper[4183]: I0813 19:44:13.526958 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:15 crc kubenswrapper[4183]: E0813 19:44:15.406986 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:17 crc kubenswrapper[4183]: E0813 19:44:17.290252 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Aug 13 19:44:17 crc kubenswrapper[4183]: E0813 19:44:17.452281 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Aug 13 19:44:18 crc kubenswrapper[4183]: E0813 19:44:18.909575 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": net/http: TLS handshake timeout
Aug 13 19:44:20 crc kubenswrapper[4183]: E0813 19:44:20.593140 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:21 crc kubenswrapper[4183]: I0813 19:44:21.170909 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded" start-of-body=
Aug 13 19:44:21 crc kubenswrapper[4183]: I0813 19:44:21.171045 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded"
Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.208232 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.208402 4183 trace.go:236] Trace[505837227]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (13-Aug-2025 19:44:12.205) (total time: 10002ms):
Aug 13 19:44:22 crc kubenswrapper[4183]: Trace[505837227]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:44:22.208)
Aug 13 19:44:22 crc kubenswrapper[4183]: Trace[505837227]: [10.002428675s] [10.002428675s] END
Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.208424 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.427506 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.427635 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer"
Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.443211 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate
has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.443301 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.492631 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: W0813 19:44:22.495898 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: E0813 19:44:22.496042 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.530058 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:22Z is after 2025-06-26T12:47:18Z Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.535586 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.535739 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.581414 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.581995 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.882447 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885166 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff" exitCode=255 Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff"} Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.885557 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.887352 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:22 crc kubenswrapper[4183]: I0813 19:44:22.888737 4183 scope.go:117] "RemoveContainer" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.573335 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:23Z is after 2025-06-26T12:47:18Z Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 
19:44:23.771285 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.772341 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774445 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.774544 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.811249 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.894466 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.903096 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905088 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:23 crc kubenswrapper[4183]: I0813 19:44:23.905110 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:24 crc kubenswrapper[4183]: E0813 19:44:24.295813 4183 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.453246 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.455919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456074 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.456132 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:44:24 crc kubenswrapper[4183]: E0813 19:44:24.472356 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.508688 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:24Z is after 2025-06-26T12:47:18Z Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.891416 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.908121 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.910526 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"} Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.910718 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911904 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911957 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:24 crc kubenswrapper[4183]: I0813 19:44:24.911975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:25 crc kubenswrapper[4183]: E0813 19:44:25.408285 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.512733 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:25Z is after 2025-06-26T12:47:18Z Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.913000 4183 kubelet_node_status.go:402] "Setting node 
annotation to enable volume controller attach/detach" Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.913136 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916084 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:25 crc kubenswrapper[4183]: I0813 19:44:25.916168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.185479 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:26 crc kubenswrapper[4183]: W0813 19:44:26.220924 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z Aug 13 19:44:26 crc kubenswrapper[4183]: E0813 19:44:26.221145 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.508892 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:26Z is after 2025-06-26T12:47:18Z Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.921346 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.923508 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/0.log" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.928912 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" exitCode=255 Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.928964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"} Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.929010 4183 scope.go:117] "RemoveContainer" containerID="9de5e451cc2d3784d191ca7ee29ddfdd8d4ba15f3a93c605d7c310f6a8f0c5ff" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.929285 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.932302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.933985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:26 crc 
kubenswrapper[4183]: I0813 19:44:26.934318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.940734 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:26 crc kubenswrapper[4183]: E0813 19:44:26.943129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:26 crc kubenswrapper[4183]: I0813 19:44:26.953158 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.509157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:27Z is after 2025-06-26T12:47:18Z Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.933897 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log" Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.939891 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941421 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 
19:44:27.941681 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.941908 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:27 crc kubenswrapper[4183]: I0813 19:44:27.943245 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:27 crc kubenswrapper[4183]: E0813 19:44:27.943855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.507271 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:28Z is after 2025-06-26T12:47:18Z Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.945603 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947340 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947415 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.947437 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Aug 13 19:44:28 crc kubenswrapper[4183]: I0813 19:44:28.949265 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8" Aug 13 19:44:28 crc kubenswrapper[4183]: E0813 19:44:28.949934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:44:29 crc kubenswrapper[4183]: I0813 19:44:29.510225 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:29Z is after 2025-06-26T12:47:18Z Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.179631 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer" start-of-body= Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.179912 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.180009 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.180293 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.184746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.189862 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.190889 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" gracePeriod=30 Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.508175 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:30Z is after 2025-06-26T12:47:18Z Aug 13 19:44:30 crc 
kubenswrapper[4183]: E0813 19:44:30.598587 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:30Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.957497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/0.log" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958419 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" exitCode=255 Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958502 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c"} Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958532 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"} Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.958833 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:30 crc kubenswrapper[4183]: I0813 19:44:30.960085 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:31 crc kubenswrapper[4183]: E0813 19:44:31.300057 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.474098 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475689 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475940 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.475967 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.476003 4183 
kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:31 crc kubenswrapper[4183]: E0813 19:44:31.479716 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.508445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.559283 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.962125 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963607 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963676 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:31 crc kubenswrapper[4183]: I0813 19:44:31.963699 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:32 crc kubenswrapper[4183]: I0813 19:44:32.508713 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:33 crc kubenswrapper[4183]: I0813 19:44:33.507968 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.509459 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.891356 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.891730 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893298 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.893407 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.894609 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.895045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:34 crc kubenswrapper[4183]: I0813 19:44:34.956972 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.965734 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:34 crc kubenswrapper[4183]: E0813 19:44:34.965983 4183 certificate_manager.go:440] kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition
Aug 13 19:44:35 crc kubenswrapper[4183]: E0813 19:44:35.409388 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:35 crc kubenswrapper[4183]: I0813 19:44:35.507686 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:36 crc kubenswrapper[4183]: I0813 19:44:36.509197 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:36 crc kubenswrapper[4183]: W0813 19:44:36.583957 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:36 crc kubenswrapper[4183]: E0813 19:44:36.584065 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:37 crc kubenswrapper[4183]: I0813 19:44:37.507683 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:38 crc kubenswrapper[4183]: E0813 19:44:38.304970 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.480243 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482006 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482036 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482051 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.482077 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:38 crc kubenswrapper[4183]: E0813 19:44:38.486195 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:38 crc kubenswrapper[4183]: I0813 19:44:38.507744 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.508194 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.580897 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.581127 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582389 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:39 crc kubenswrapper[4183]: I0813 19:44:39.582473 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:40 crc kubenswrapper[4183]: I0813 19:44:40.507720 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:40 crc kubenswrapper[4183]: E0813 19:44:40.603676 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:40Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:41 crc kubenswrapper[4183]: I0813 19:44:41.507445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.507559 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: W0813 19:44:42.522365 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: E0813 19:44:42.522440 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.581872 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:44:42 crc kubenswrapper[4183]: I0813 19:44:42.582387 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:44:43 crc kubenswrapper[4183]: I0813 19:44:43.508421 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:44 crc kubenswrapper[4183]: I0813 19:44:44.507425 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:45 crc kubenswrapper[4183]: W0813 19:44:45.280999 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.281599 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.309494 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.410132 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.486592 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.489724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490649 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.490692 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:45 crc kubenswrapper[4183]: E0813 19:44:45.496415 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:45 crc kubenswrapper[4183]: I0813 19:44:45.508552 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.352404 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.353013 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354573 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.354587 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:46 crc kubenswrapper[4183]: I0813 19:44:46.507711 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:47 crc kubenswrapper[4183]: W0813 19:44:47.185997 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:47 crc kubenswrapper[4183]: E0813 19:44:47.186303 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:47 crc kubenswrapper[4183]: I0813 19:44:47.508005 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:48 crc kubenswrapper[4183]: I0813 19:44:48.530896 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:49 crc kubenswrapper[4183]: I0813 19:44:49.508142 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.208245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209728 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.209743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.211129 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:50 crc kubenswrapper[4183]: I0813 19:44:50.508572 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:50 crc kubenswrapper[4183]: E0813 19:44:50.611066 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.030401 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log"
Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.045562 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"}
Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.046059 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048183 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.048203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:51 crc kubenswrapper[4183]: I0813 19:44:51.510559 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.054591 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.055848 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/1.log"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064063 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" exitCode=255
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064165 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"}
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064305 4183 scope.go:117] "RemoveContainer" containerID="c827bc1d1e0c62e30b803aa06d0e91a7dc8fda2b967748fd3fae83c74b9028e8"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.064881 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.067529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.070693 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"
Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.072699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.319223 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.496694 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498405 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498720 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.498978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.499107 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:52 crc kubenswrapper[4183]: E0813 19:44:52.504188 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.507577 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:52Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.581562 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:44:52 crc kubenswrapper[4183]: I0813 19:44:52.581752 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:44:53 crc kubenswrapper[4183]: I0813 19:44:53.070983 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log"
Aug 13 19:44:53 crc kubenswrapper[4183]: I0813 19:44:53.508312 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.508279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657538 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657691 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657720 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657741 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.657755 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.891466 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.892106 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.893700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.894037 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.894089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:54 crc kubenswrapper[4183]: I0813 19:44:54.895662 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"
Aug 13 19:44:54 crc kubenswrapper[4183]: E0813 19:44:54.896216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:55 crc kubenswrapper[4183]: E0813 19:44:55.410662 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:44:55 crc kubenswrapper[4183]: I0813 19:44:55.507525 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:55Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:56 crc kubenswrapper[4183]: I0813 19:44:56.508760 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:56Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.507157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:57Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.563091 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.563345 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.565501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.565852 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.566000 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:57 crc kubenswrapper[4183]: I0813 19:44:57.571517 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98"
Aug 13 19:44:57 crc kubenswrapper[4183]: E0813 19:44:57.572262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:44:58 crc kubenswrapper[4183]: I0813 19:44:58.507190 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:58Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:59 crc kubenswrapper[4183]: E0813 19:44:59.326432 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.504460 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506489 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506660 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506694 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.506737 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:44:59 crc kubenswrapper[4183]: I0813 19:44:59.509406 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z
Aug 13 19:44:59 crc kubenswrapper[4183]: E0813 19:44:59.512950 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:44:59Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.507961 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:00Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:00 crc kubenswrapper[4183]: E0813 19:45:00.615941 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:00Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995163 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:59688->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995291 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:59688->192.168.126.11:10357: read: connection reset
by peer" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995354 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.995730 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:00 crc kubenswrapper[4183]: I0813 19:45:00.997385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.002082 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.003082 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" gracePeriod=30 Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.100706 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log" Aug 13 
19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.102983 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/0.log" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106342 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9" exitCode=255 Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"} Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.106447 4183 scope.go:117] "RemoveContainer" containerID="7670de641a29c43088fe5304b3060d152eed7ef9cf7e78cb240a6c54fce1995c" Aug 13 19:45:01 crc kubenswrapper[4183]: I0813 19:45:01.508464 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:01Z is after 2025-06-26T12:47:18Z Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.111742 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.113541 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"} Aug 13 19:45:02 
crc kubenswrapper[4183]: I0813 19:45:02.113650 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114682 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114738 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.114754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:02 crc kubenswrapper[4183]: I0813 19:45:02.509447 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:02Z is after 2025-06-26T12:47:18Z Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.116281 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117326 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.117394 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:03 crc kubenswrapper[4183]: I0813 19:45:03.508066 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:45:03Z is after 2025-06-26T12:47:18Z Aug 13 19:45:04 crc kubenswrapper[4183]: I0813 19:45:04.509005 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:04Z is after 2025-06-26T12:47:18Z Aug 13 19:45:05 crc kubenswrapper[4183]: E0813 19:45:05.410927 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:05 crc kubenswrapper[4183]: I0813 19:45:05.509997 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:05Z is after 2025-06-26T12:47:18Z Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.332956 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.507894 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.514149 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume 
controller attach/detach" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.516437 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.520556 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:06 crc kubenswrapper[4183]: I0813 19:45:06.969439 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:45:06 crc kubenswrapper[4183]: E0813 19:45:06.974382 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:06Z is after 2025-06-26T12:47:18Z Aug 13 19:45:07 crc kubenswrapper[4183]: I0813 19:45:07.507969 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-08-13T19:45:07Z is after 2025-06-26T12:47:18Z Aug 13 19:45:08 crc kubenswrapper[4183]: I0813 19:45:08.508286 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:08Z is after 2025-06-26T12:47:18Z Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.507931 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:09Z is after 2025-06-26T12:47:18Z Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.581036 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.581296 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582950 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:09 crc kubenswrapper[4183]: I0813 19:45:09.582974 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:10 crc kubenswrapper[4183]: I0813 19:45:10.508251 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:45:10Z is after 2025-06-26T12:47:18Z Aug 13 19:45:10 crc kubenswrapper[4183]: E0813 19:45:10.621077 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:10Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.507141 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:11Z is after 2025-06-26T12:47:18Z Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.558506 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.558664 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:11 
crc kubenswrapper[4183]: I0813 19:45:11.560465 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:11 crc kubenswrapper[4183]: I0813 19:45:11.560495 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.209239 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211092 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.211104 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.212843 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:45:12 crc kubenswrapper[4183]: W0813 19:45:12.375543 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z Aug 13 19:45:12 crc kubenswrapper[4183]: E0813 19:45:12.375667 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.508906 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:12Z is after 2025-06-26T12:47:18Z Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.582036 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:45:12 crc kubenswrapper[4183]: I0813 19:45:12.582203 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.152957 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.156207 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"} Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.156392 4183 kubelet_node_status.go:402] "Setting node annotation to 
enable volume controller attach/detach" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157541 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157717 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.157924 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:13 crc kubenswrapper[4183]: E0813 19:45:13.337071 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.508426 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.520646 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522157 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522456 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522529 4183 kubelet_node_status.go:729] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Aug 13 19:45:13 crc kubenswrapper[4183]: I0813 19:45:13.522603 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:13 crc kubenswrapper[4183]: E0813 19:45:13.528513 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:13Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.161681 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.162518 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/2.log" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.166966 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" exitCode=255 Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167054 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"} Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167107 4183 scope.go:117] "RemoveContainer" containerID="2e2e57111c702d662b174d77e773e5ea0e244d70bcef09eea07eac62e0f0af98" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.167229 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168632 
4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.168849 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.170929 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:14 crc kubenswrapper[4183]: E0813 19:45:14.171697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.208869 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210386 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210540 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.210648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.507841 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:14Z is after 2025-06-26T12:47:18Z Aug 13 19:45:14 crc kubenswrapper[4183]: I0813 19:45:14.891288 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.171833 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.174120 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175060 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.175073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.176106 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:15 crc kubenswrapper[4183]: E0813 19:45:15.176437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:15 
crc kubenswrapper[4183]: E0813 19:45:15.411865 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:15 crc kubenswrapper[4183]: I0813 19:45:15.507316 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:15Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:16 crc kubenswrapper[4183]: I0813 19:45:16.509268 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:16Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.509667 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:17Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.563182 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.563484 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.565145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:17 crc kubenswrapper[4183]: I0813 19:45:17.566391 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:17 crc kubenswrapper[4183]: E0813 19:45:17.566892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:18 crc kubenswrapper[4183]: I0813 19:45:18.508241 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:19 crc kubenswrapper[4183]: I0813 19:45:19.511330 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.341923 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.508349 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.528918 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:20 crc kubenswrapper[4183]: I0813 19:45:20.530625 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.534200 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:20 crc kubenswrapper[4183]: E0813 19:45:20.627698 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:20Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:21 crc kubenswrapper[4183]: I0813 19:45:21.508311 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:22 crc kubenswrapper[4183]: W0813 19:45:22.431240 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:22 crc kubenswrapper[4183]: E0813 19:45:22.431305 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.507124 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.580405 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:45:22 crc kubenswrapper[4183]: I0813 19:45:22.580763 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:45:23 crc kubenswrapper[4183]: I0813 19:45:23.507832 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:24 crc kubenswrapper[4183]: I0813 19:45:24.509082 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:25 crc kubenswrapper[4183]: E0813 19:45:25.412585 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:25 crc kubenswrapper[4183]: I0813 19:45:25.508881 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:26 crc kubenswrapper[4183]: I0813 19:45:26.507470 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:27 crc kubenswrapper[4183]: E0813 19:45:27.346884 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.510549 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.534700 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540097 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540188 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540208 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:27 crc kubenswrapper[4183]: I0813 19:45:27.540270 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:27 crc kubenswrapper[4183]: E0813 19:45:27.544948 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:27Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:28 crc kubenswrapper[4183]: I0813 19:45:28.507944 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:29 crc kubenswrapper[4183]: W0813 19:45:29.332190 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:29 crc kubenswrapper[4183]: E0813 19:45:29.332305 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:29 crc kubenswrapper[4183]: I0813 19:45:29.508640 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:30 crc kubenswrapper[4183]: I0813 19:45:30.507496 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:30Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:30 crc kubenswrapper[4183]: E0813 19:45:30.632844 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:30Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.209282 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211543 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211643 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.211664 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.214026 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:31 crc kubenswrapper[4183]: E0813 19:45:31.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.508192 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769405 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:50512->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769522 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:50512->192.168.126.11:10357: read: connection reset by peer"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769608 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.769813 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.771861 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.771993 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.772154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.774314 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:45:31 crc kubenswrapper[4183]: I0813 19:45:31.774876 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6" gracePeriod=30
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.248265 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log"
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.248965 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/1.log"
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250470 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6" exitCode=255
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"}
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"}
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250676 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.250666 4183 scope.go:117] "RemoveContainer" containerID="0f9b09ac6e9dadb007d01c7bbc7bebd022f33438bf5b7327973cb90180aebec9"
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.251922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:32 crc kubenswrapper[4183]: I0813 19:45:32.507279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.259638 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log"
Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.262592 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264120 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.264143 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:33 crc kubenswrapper[4183]: I0813 19:45:33.508014 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:33 crc kubenswrapper[4183]: W0813 19:45:33.705946 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:33 crc kubenswrapper[4183]: E0813 19:45:33.706061 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:34 crc kubenswrapper[4183]: E0813 19:45:34.352501 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.508937 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.545880 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548101 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548187 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:34 crc kubenswrapper[4183]: I0813 19:45:34.548219 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:34 crc kubenswrapper[4183]: E0813 19:45:34.552614 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:34Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:35 crc kubenswrapper[4183]: E0813 19:45:35.413709 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:35 crc kubenswrapper[4183]: I0813 19:45:35.507972 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:36 crc kubenswrapper[4183]: I0813 19:45:36.507944 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:37 crc kubenswrapper[4183]: I0813 19:45:37.508249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:38 crc kubenswrapper[4183]: I0813 19:45:38.508206 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:38 crc kubenswrapper[4183]: I0813 19:45:38.969995 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:45:38 crc kubenswrapper[4183]: E0813 19:45:38.976170 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.508669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.581199 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.581513 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585195 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585255 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:39 crc kubenswrapper[4183]: I0813 19:45:39.585274 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:40 crc kubenswrapper[4183]: I0813 19:45:40.507390 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:40 crc kubenswrapper[4183]: E0813 19:45:40.639384 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:40Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:45:41 crc kubenswrapper[4183]: E0813 19:45:41.357739 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.508453 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.554204 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.556627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.556974 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.557203 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.557428 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.558194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.558607 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559625 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:41 crc kubenswrapper[4183]: I0813 19:45:41.559694 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:41 crc kubenswrapper[4183]: E0813 19:45:41.562659 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:41Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.508395 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.582078 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body=
Aug 13 19:45:42 crc kubenswrapper[4183]: I0813 19:45:42.582490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.208292 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.209891 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.209995 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.210016 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.211226 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53"
Aug 13 19:45:43 crc kubenswrapper[4183]: E0813 19:45:43.211633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:45:43 crc kubenswrapper[4183]: I0813 19:45:43.510590 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.209354 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213562 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213650 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.213670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:45:44 crc kubenswrapper[4183]: I0813 19:45:44.508431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:45 crc kubenswrapper[4183]: E0813 19:45:45.414942 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:45:45 crc kubenswrapper[4183]: I0813 19:45:45.508706 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:46 crc kubenswrapper[4183]: I0813 19:45:46.507259 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:45:47 crc kubenswrapper[4183]: I0813 19:45:47.509695 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time
2025-08-13T19:45:47Z is after 2025-06-26T12:47:18Z Aug 13 19:45:48 crc kubenswrapper[4183]: E0813 19:45:48.363856 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.508271 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 2025-06-26T12:47:18Z Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.564016 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567522 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567574 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:48 crc kubenswrapper[4183]: I0813 19:45:48.567632 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:48 crc kubenswrapper[4183]: E0813 19:45:48.572082 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:48Z is after 
2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.208719 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210354 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210508 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.210740 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:49 crc kubenswrapper[4183]: I0813 19:45:49.508264 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:49Z is after 2025-06-26T12:47:18Z Aug 13 19:45:50 crc kubenswrapper[4183]: I0813 19:45:50.508065 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.643361 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.643457 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18e7a3052c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,LastTimestamp:2025-08-13 19:43:54.500547884 +0000 UTC m=+1.193212942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:50 crc kubenswrapper[4183]: E0813 19:45:50.647449 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:50Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC 
m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:51 crc kubenswrapper[4183]: I0813 19:45:51.509519 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:51Z is after 2025-06-26T12:47:18Z Aug 13 19:45:51 crc kubenswrapper[4183]: E0813 19:45:51.794485 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:51Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.509904 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:52Z is after 2025-06-26T12:47:18Z Aug 13 
19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.581821 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:45:52 crc kubenswrapper[4183]: I0813 19:45:52.582173 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:45:53 crc kubenswrapper[4183]: I0813 19:45:53.509729 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:53Z is after 2025-06-26T12:47:18Z Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.508647 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:54Z is after 2025-06-26T12:47:18Z Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659167 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659309 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659344 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659370 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:45:54 crc kubenswrapper[4183]: I0813 19:45:54.659420 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.368548 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.416050 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.507485 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.574137 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576618 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:55 crc kubenswrapper[4183]: I0813 19:45:55.576708 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:45:55 crc kubenswrapper[4183]: E0813 19:45:55.580415 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:55Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.209118 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210840 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.210859 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.212460 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:56 crc kubenswrapper[4183]: I0813 19:45:56.510678 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:56Z is after 2025-06-26T12:47:18Z Aug 13 19:45:57 crc 
kubenswrapper[4183]: I0813 19:45:57.354644 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.357558 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"} Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.357718 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.358960 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.359026 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.359043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.508048 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:57Z is after 2025-06-26T12:47:18Z Aug 13 19:45:57 crc kubenswrapper[4183]: I0813 19:45:57.563936 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.363040 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.364913 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/3.log" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367439 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" exitCode=255 Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367539 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"} Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367572 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.367603 4183 scope.go:117] "RemoveContainer" containerID="89ea5c4b7625d1ba9b9cfcf78e2be8cb372cc58135d7587f6df13e0c8e044b53" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369304 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369404 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.369630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.371325 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 
19:45:58 crc kubenswrapper[4183]: E0813 19:45:58.371984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:58 crc kubenswrapper[4183]: I0813 19:45:58.508439 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:58Z is after 2025-06-26T12:47:18Z Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.376302 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.384107 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386120 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.386155 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.388711 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:45:59 crc kubenswrapper[4183]: E0813 
19:45:59.389651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:45:59 crc kubenswrapper[4183]: I0813 19:45:59.507063 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:45:59Z is after 2025-06-26T12:47:18Z Aug 13 19:46:00 crc kubenswrapper[4183]: I0813 19:46:00.517885 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:00Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: W0813 19:46:01.348988 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: E0813 19:46:01.349134 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: I0813 19:46:01.507847 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z Aug 13 19:46:01 crc kubenswrapper[4183]: E0813 19:46:01.804456 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:01Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:02 crc kubenswrapper[4183]: E0813 19:46:02.375954 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.511228 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.571763 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:36156->192.168.126.11:10357: read: connection reset by peer" start-of-body= Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.571983 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:36156->192.168.126.11:10357: read: connection reset by peer" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.572064 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.572264 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574337 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.574378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 
19:46:02.576042 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.576385 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f" gracePeriod=30 Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.581620 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584487 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584708 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:02 crc kubenswrapper[4183]: I0813 19:46:02.584834 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:02 crc kubenswrapper[4183]: E0813 19:46:02.595868 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:02Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:03 crc 
kubenswrapper[4183]: I0813 19:46:03.399607 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.400721 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/2.log"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.402969 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f" exitCode=255
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403024 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"}
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403062 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"}
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403091 4183 scope.go:117] "RemoveContainer" containerID="dcdf75b3e39eac7c9e0c31f36cbe80951a52cc88109649d9e8c38789aca6bfb6"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.403245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404463 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404582 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.404599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:03 crc kubenswrapper[4183]: I0813 19:46:03.507733 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:03Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.413221 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.509144 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:04Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.892034 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.892472 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.894998 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.895184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.895294 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:04 crc kubenswrapper[4183]: I0813 19:46:04.896912 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:46:04 crc kubenswrapper[4183]: E0813 19:46:04.897399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:46:05 crc kubenswrapper[4183]: E0813 19:46:05.416222 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:46:05 crc kubenswrapper[4183]: W0813 19:46:05.449941 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:05 crc kubenswrapper[4183]: E0813 19:46:05.450097 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:05 crc kubenswrapper[4183]: I0813 19:46:05.508913 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:05Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:06 crc kubenswrapper[4183]: I0813 19:46:06.510697 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:06Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:07 crc kubenswrapper[4183]: I0813 19:46:07.508141 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:07Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:08 crc kubenswrapper[4183]: I0813 19:46:08.509106 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:08Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:09 crc kubenswrapper[4183]: E0813 19:46:09.380169 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.508176 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.580950 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.581183 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.584743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.585010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.585109 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.596742 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598652 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598702 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:09 crc kubenswrapper[4183]: I0813 19:46:09.598745 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:09 crc kubenswrapper[4183]: E0813 19:46:09.605621 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:09Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:10 crc kubenswrapper[4183]: I0813 19:46:10.509770 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:10Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:10 crc kubenswrapper[4183]: I0813 19:46:10.969747 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:46:10 crc kubenswrapper[4183]: E0813 19:46:10.975379 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:10Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.511689 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:11Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.559714 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.561022 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564287 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:11 crc kubenswrapper[4183]: I0813 19:46:11.564307 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:11 crc kubenswrapper[4183]: E0813 19:46:11.816090 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:11Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.509294 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:12Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.581260 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:46:12 crc kubenswrapper[4183]: I0813 19:46:12.581482 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:46:13 crc kubenswrapper[4183]: I0813 19:46:13.519035 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:13Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:14 crc kubenswrapper[4183]: I0813 19:46:14.509354 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:14Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:15 crc kubenswrapper[4183]: E0813 19:46:15.416692 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:46:15 crc kubenswrapper[4183]: I0813 19:46:15.508135 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:15Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:16 crc kubenswrapper[4183]: E0813 19:46:16.385964 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.507766 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.606104 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607912 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:16 crc kubenswrapper[4183]: I0813 19:46:16.607953 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:16 crc kubenswrapper[4183]: E0813 19:46:16.612289 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:16Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:17 crc kubenswrapper[4183]: I0813 19:46:17.507760 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:17Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:18 crc kubenswrapper[4183]: I0813 19:46:18.509153 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:18 crc kubenswrapper[4183]: W0813 19:46:18.734308 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:18 crc kubenswrapper[4183]: E0813 19:46:18.734454 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.209340 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.211190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.212634 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:46:19 crc kubenswrapper[4183]: E0813 19:46:19.213052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:46:19 crc kubenswrapper[4183]: I0813 19:46:19.513958 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:20 crc kubenswrapper[4183]: I0813 19:46:20.508721 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:21 crc kubenswrapper[4183]: I0813 19:46:21.509911 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:21 crc kubenswrapper[4183]: E0813 19:46:21.820321 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:21Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.508481 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.580330 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:46:22 crc kubenswrapper[4183]: I0813 19:46:22.580470 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:46:23 crc kubenswrapper[4183]: E0813 19:46:23.390894 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.508225 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.613426 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615406 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615582 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:23 crc kubenswrapper[4183]: I0813 19:46:23.615626 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:23 crc kubenswrapper[4183]: E0813 19:46:23.619335 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:23Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:24 crc kubenswrapper[4183]: I0813 19:46:24.508866 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:25 crc kubenswrapper[4183]: E0813 19:46:25.417160 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:46:25 crc kubenswrapper[4183]: I0813 19:46:25.508965 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:26 crc kubenswrapper[4183]: W0813 19:46:26.192309 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:26 crc kubenswrapper[4183]: E0813 19:46:26.192390 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:26 crc kubenswrapper[4183]: I0813 19:46:26.508890 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:27 crc kubenswrapper[4183]: I0813 19:46:27.508416 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:28 crc kubenswrapper[4183]: I0813 19:46:28.509326 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:29 crc kubenswrapper[4183]: I0813 19:46:29.507732 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:30 crc kubenswrapper[4183]: E0813 19:46:30.396465 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.509171 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.619914 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622079 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622098 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:30 crc kubenswrapper[4183]: I0813 19:46:30.622127 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:46:30 crc kubenswrapper[4183]: E0813 19:46:30.626393 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:30Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:46:31 crc kubenswrapper[4183]: I0813 19:46:31.507850 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:31 crc kubenswrapper[4183]: E0813 19:46:31.824915 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.209187 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210595 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.210615 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.212945 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:46:32 crc kubenswrapper[4183]: E0813 19:46:32.213376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.510109 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581386 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581503 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581546 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.581805 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583846 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.583916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.585397 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:46:32 crc kubenswrapper[4183]: I0813 19:46:32.585847 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" gracePeriod=30
Aug 13 19:46:32 crc kubenswrapper[4183]: E0813 19:46:32.750551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.508606 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.531882 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.533863 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/3.log"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.536919 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" exitCode=255
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537005 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"}
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537060 4183 scope.go:117] "RemoveContainer" containerID="4a09dda3746e6c59af493f2778fdf8195f1e39bbc6699be4e03d0b41c4a15e3f"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.537440 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539432 4183
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539516 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.539540 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:33 crc kubenswrapper[4183]: I0813 19:46:33.542224 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:46:33 crc kubenswrapper[4183]: E0813 19:46:33.543207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:46:34 crc kubenswrapper[4183]: I0813 19:46:34.511695 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:34Z is after 2025-06-26T12:47:18Z Aug 13 19:46:34 crc kubenswrapper[4183]: I0813 19:46:34.542528 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log" Aug 13 19:46:35 crc kubenswrapper[4183]: E0813 19:46:35.417415 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:35 crc kubenswrapper[4183]: I0813 
19:46:35.508819 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:35Z is after 2025-06-26T12:47:18Z Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.208887 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210562 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.210609 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:36 crc kubenswrapper[4183]: I0813 19:46:36.508479 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:36Z is after 2025-06-26T12:47:18Z Aug 13 19:46:37 crc kubenswrapper[4183]: E0813 19:46:37.401966 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.509111 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.627700 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630433 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:37 crc kubenswrapper[4183]: I0813 19:46:37.630466 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:37 crc kubenswrapper[4183]: E0813 19:46:37.634557 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:37Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:38 crc kubenswrapper[4183]: I0813 19:46:38.508190 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:38Z is after 2025-06-26T12:47:18Z Aug 13 19:46:39 crc kubenswrapper[4183]: I0813 19:46:39.507942 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:39Z is after 2025-06-26T12:47:18Z Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.508066 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:40Z is after 2025-06-26T12:47:18Z Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.519061 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.519281 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521474 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.521498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:40 crc kubenswrapper[4183]: I0813 19:46:40.523226 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:46:40 crc kubenswrapper[4183]: E0813 19:46:40.524113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:46:41 crc kubenswrapper[4183]: I0813 19:46:41.507265 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:41Z is after 2025-06-26T12:47:18Z Aug 13 19:46:41 crc kubenswrapper[4183]: E0813 19:46:41.829421 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:41Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:42 crc kubenswrapper[4183]: I0813 19:46:42.508908 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:42Z is after 2025-06-26T12:47:18Z Aug 13 19:46:42 crc 
kubenswrapper[4183]: I0813 19:46:42.969557 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:46:42 crc kubenswrapper[4183]: E0813 19:46:42.974395 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:42Z is after 2025-06-26T12:47:18Z Aug 13 19:46:43 crc kubenswrapper[4183]: I0813 19:46:43.507078 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:43Z is after 2025-06-26T12:47:18Z Aug 13 19:46:44 crc kubenswrapper[4183]: E0813 19:46:44.408387 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.508719 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.634877 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 
19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636828 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636871 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:44 crc kubenswrapper[4183]: I0813 19:46:44.636915 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:44 crc kubenswrapper[4183]: E0813 19:46:44.640455 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:44Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:45 crc kubenswrapper[4183]: E0813 19:46:45.418298 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:45 crc kubenswrapper[4183]: I0813 19:46:45.508495 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:45Z is after 2025-06-26T12:47:18Z Aug 13 19:46:46 crc kubenswrapper[4183]: I0813 19:46:46.509767 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:46Z is after 2025-06-26T12:47:18Z Aug 13 19:46:47 crc 
kubenswrapper[4183]: I0813 19:46:47.209002 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.211679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.211988 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.212106 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.217114 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:46:47 crc kubenswrapper[4183]: E0813 19:46:47.218997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:46:47 crc kubenswrapper[4183]: I0813 19:46:47.509395 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:47Z is after 2025-06-26T12:47:18Z Aug 13 19:46:48 crc kubenswrapper[4183]: I0813 19:46:48.509431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:46:48Z is after 2025-06-26T12:47:18Z Aug 13 19:46:49 crc kubenswrapper[4183]: I0813 19:46:49.509521 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:49Z is after 2025-06-26T12:47:18Z Aug 13 19:46:50 crc kubenswrapper[4183]: I0813 19:46:50.511905 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:50Z is after 2025-06-26T12:47:18Z Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.415323 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.512918 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.640738 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643827 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:51 crc 
kubenswrapper[4183]: I0813 19:46:51.643923 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643941 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:51 crc kubenswrapper[4183]: I0813 19:46:51.643979 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.648044 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:51 crc kubenswrapper[4183]: E0813 19:46:51.835285 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:51Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:46:52 crc kubenswrapper[4183]: I0813 19:46:52.508157 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:52Z is after 2025-06-26T12:47:18Z Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.209177 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211254 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211362 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.211384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.214540 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:46:53 crc kubenswrapper[4183]: E0813 19:46:53.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:46:53 crc kubenswrapper[4183]: I0813 19:46:53.508249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:53Z is after 2025-06-26T12:47:18Z Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 
19:46:54.509012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660046 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660276 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660354 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660430 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: I0813 19:46:54.660490 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:46:54 crc kubenswrapper[4183]: W0813 19:46:54.762914 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z Aug 13 19:46:54 crc kubenswrapper[4183]: E0813 19:46:54.763075 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:54Z is after 2025-06-26T12:47:18Z Aug 13 19:46:55 crc kubenswrapper[4183]: E0813 19:46:55.419283 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:46:55 crc kubenswrapper[4183]: I0813 19:46:55.507740 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:55Z is after 2025-06-26T12:47:18Z Aug 13 19:46:56 crc kubenswrapper[4183]: W0813 19:46:56.316182 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z Aug 13 19:46:56 crc kubenswrapper[4183]: E0813 19:46:56.317742 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z Aug 13 19:46:56 crc kubenswrapper[4183]: I0813 19:46:56.507468 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:56Z is after 2025-06-26T12:47:18Z Aug 13 19:46:57 crc 
kubenswrapper[4183]: I0813 19:46:57.510435 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:57Z is after 2025-06-26T12:47:18Z Aug 13 19:46:58 crc kubenswrapper[4183]: E0813 19:46:58.420378 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.510520 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.648586 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650638 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650666 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:46:58 crc kubenswrapper[4183]: I0813 19:46:58.650710 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:46:58 crc 
kubenswrapper[4183]: E0813 19:46:58.655036 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:58Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:46:59 crc kubenswrapper[4183]: I0813 19:46:59.507745 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:46:59Z is after 2025-06-26T12:47:18Z Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.209201 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.210994 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.211078 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.211095 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.212387 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:00 crc kubenswrapper[4183]: E0813 19:47:00.212844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:47:00 crc kubenswrapper[4183]: I0813 19:47:00.507343 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:00Z is after 2025-06-26T12:47:18Z Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.209026 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.210969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.211158 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.211204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:01 crc kubenswrapper[4183]: I0813 19:47:01.508677 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:01Z is after 2025-06-26T12:47:18Z Aug 13 19:47:01 crc kubenswrapper[4183]: E0813 19:47:01.841030 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:01Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:02 crc kubenswrapper[4183]: I0813 19:47:02.508683 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:02Z is after 2025-06-26T12:47:18Z Aug 13 19:47:03 crc kubenswrapper[4183]: I0813 19:47:03.508739 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:03Z is after 2025-06-26T12:47:18Z Aug 13 19:47:04 crc kubenswrapper[4183]: W0813 19:47:04.417066 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z Aug 13 19:47:04 crc kubenswrapper[4183]: E0813 19:47:04.417169 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z Aug 13 19:47:04 crc kubenswrapper[4183]: I0813 19:47:04.509200 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:04Z is after 2025-06-26T12:47:18Z Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.208117 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.208129 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209842 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209912 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.209931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210707 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.210721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.211447 4183 scope.go:117] "RemoveContainer" 
containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.212145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.419590 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.424203 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.507503 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.655938 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657885 4183 kubelet_node_status.go:729] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657904 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:05 crc kubenswrapper[4183]: I0813 19:47:05.657938 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:05 crc kubenswrapper[4183]: E0813 19:47:05.661734 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:05Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:06 crc kubenswrapper[4183]: I0813 19:47:06.507595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:06Z is after 2025-06-26T12:47:18Z Aug 13 19:47:07 crc kubenswrapper[4183]: I0813 19:47:07.508034 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:07Z is after 2025-06-26T12:47:18Z Aug 13 19:47:08 crc kubenswrapper[4183]: I0813 19:47:08.509584 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:08Z is after 2025-06-26T12:47:18Z Aug 13 19:47:09 crc kubenswrapper[4183]: I0813 19:47:09.508409 4183 csi_plugin.go:880] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:09Z is after 2025-06-26T12:47:18Z Aug 13 19:47:10 crc kubenswrapper[4183]: I0813 19:47:10.508936 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:10Z is after 2025-06-26T12:47:18Z Aug 13 19:47:11 crc kubenswrapper[4183]: I0813 19:47:11.508554 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:11Z is after 2025-06-26T12:47:18Z Aug 13 19:47:11 crc kubenswrapper[4183]: E0813 19:47:11.846977 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:11Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:12 crc kubenswrapper[4183]: E0813 19:47:12.429244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.508767 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.662198 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664207 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664223 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:12 crc kubenswrapper[4183]: I0813 19:47:12.664255 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:12 crc kubenswrapper[4183]: E0813 19:47:12.667699 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-08-13T19:47:12Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:13 crc kubenswrapper[4183]: I0813 19:47:13.507705 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:13Z is after 2025-06-26T12:47:18Z Aug 13 19:47:14 crc kubenswrapper[4183]: I0813 19:47:14.511515 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:14Z is after 2025-06-26T12:47:18Z Aug 13 19:47:14 crc kubenswrapper[4183]: I0813 19:47:14.969073 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Aug 13 19:47:14 crc kubenswrapper[4183]: E0813 19:47:14.974040 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:14Z is after 2025-06-26T12:47:18Z Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.211738 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.214599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.215001 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.216039 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.223661 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:15 crc kubenswrapper[4183]: E0813 19:47:15.224342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:47:15 crc kubenswrapper[4183]: E0813 19:47:15.419994 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:15 crc kubenswrapper[4183]: I0813 19:47:15.507591 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:15Z is after 2025-06-26T12:47:18Z Aug 13 19:47:16 crc kubenswrapper[4183]: I0813 19:47:16.508495 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:16Z is after 2025-06-26T12:47:18Z Aug 13 19:47:17 crc kubenswrapper[4183]: I0813 19:47:17.507568 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:17Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: W0813 19:47:18.411205 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: E0813 19:47:18.411326 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:18 crc kubenswrapper[4183]: I0813 19:47:18.508359 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:18Z is after 2025-06-26T12:47:18Z Aug 13 19:47:19 crc kubenswrapper[4183]: E0813 19:47:19.433994 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.507416 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.668611 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671895 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671913 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:19 crc kubenswrapper[4183]: I0813 19:47:19.671939 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:19 crc kubenswrapper[4183]: E0813 19:47:19.675885 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:19Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.209458 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212833 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212929 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.212950 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.214992 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.508204 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:20Z is after 2025-06-26T12:47:18Z Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.719999 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.722455 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.722689 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723695 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723877 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:20 crc kubenswrapper[4183]: I0813 19:47:20.723902 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.509064 
4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:21Z is after 2025-06-26T12:47:18Z Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.558967 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.725590 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727166 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:21 crc kubenswrapper[4183]: I0813 19:47:21.727188 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:21 crc kubenswrapper[4183]: E0813 19:47:21.851710 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:21Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC 
m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:22 crc kubenswrapper[4183]: I0813 19:47:22.509575 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:22Z is after 2025-06-26T12:47:18Z Aug 13 19:47:23 crc kubenswrapper[4183]: I0813 19:47:23.509622 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:23Z is after 2025-06-26T12:47:18Z Aug 13 19:47:24 crc kubenswrapper[4183]: I0813 19:47:24.508707 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:24Z is after 2025-06-26T12:47:18Z Aug 13 19:47:25 crc kubenswrapper[4183]: E0813 19:47:25.420710 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:25 crc kubenswrapper[4183]: I0813 19:47:25.509082 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:25Z is after 2025-06-26T12:47:18Z Aug 13 19:47:26 crc kubenswrapper[4183]: E0813 
19:47:26.438944 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.509324 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.676882 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678202 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:26 crc kubenswrapper[4183]: I0813 19:47:26.678283 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:26 crc kubenswrapper[4183]: E0813 19:47:26.683126 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:26Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:27 crc kubenswrapper[4183]: I0813 19:47:27.508125 4183 
csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:27Z is after 2025-06-26T12:47:18Z Aug 13 19:47:28 crc kubenswrapper[4183]: I0813 19:47:28.512301 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:28Z is after 2025-06-26T12:47:18Z Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.208320 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210618 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210723 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.210741 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.212256 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed" Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.508562 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:29Z is after 2025-06-26T12:47:18Z Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.581158 4183 
kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.582083 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584193 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584290 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:29 crc kubenswrapper[4183]: I0813 19:47:29.584311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.508950 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:30Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.759441 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.762178 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"}
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.762366 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763392 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:30 crc kubenswrapper[4183]: I0813 19:47:30.763468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.509020 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.768549 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.769851 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/4.log"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772701 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" exitCode=255
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772760 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerDied","Data":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"}
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.772973 4183 scope.go:117] "RemoveContainer" containerID="21bea5e9ace0fbd58622f6ba9a0efdb173b7764b3c538f587b835ba219dcd2ed"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.773033 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.774525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:31 crc kubenswrapper[4183]: I0813 19:47:31.775962 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.776312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.858065 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.858166 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:31 crc kubenswrapper[4183]: E0813 19:47:31.862471 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.510068 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.581916 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.582098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:47:32 crc kubenswrapper[4183]: I0813 19:47:32.780905 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 19:47:33 crc kubenswrapper[4183]: E0813 19:47:33.454745 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.510367 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.685376 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687753 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687831 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687856 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:33 crc kubenswrapper[4183]: I0813 19:47:33.687888 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:33 crc kubenswrapper[4183]: E0813 19:47:33.697290 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:33Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.509279 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.891441 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.891615 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893176 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893232 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.893250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:34 crc kubenswrapper[4183]: I0813 19:47:34.895909 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:34 crc kubenswrapper[4183]: E0813 19:47:34.896584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:35 crc kubenswrapper[4183]: E0813 19:47:35.422135 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:35 crc kubenswrapper[4183]: I0813 19:47:35.508730 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:36 crc kubenswrapper[4183]: I0813 19:47:36.507939 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:36 crc kubenswrapper[4183]: E0813 19:47:36.808517 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:36Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.507996 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.564200 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.564474 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.565916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.565990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.566009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:37 crc kubenswrapper[4183]: I0813 19:47:37.567289 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:37 crc kubenswrapper[4183]: E0813 19:47:37.567716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:38 crc kubenswrapper[4183]: I0813 19:47:38.509349 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:39 crc kubenswrapper[4183]: I0813 19:47:39.508117 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:40 crc kubenswrapper[4183]: E0813 19:47:40.462748 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.508756 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.698172 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:40 crc kubenswrapper[4183]: I0813 19:47:40.700442 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:40 crc kubenswrapper[4183]: E0813 19:47:40.709132 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:40Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:41 crc kubenswrapper[4183]: I0813 19:47:41.512169 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.507757 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.582073 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:47:42 crc kubenswrapper[4183]: I0813 19:47:42.582216 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:47:43 crc kubenswrapper[4183]: I0813 19:47:43.508350 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:44 crc kubenswrapper[4183]: I0813 19:47:44.508294 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:45 crc kubenswrapper[4183]: E0813 19:47:45.422727 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:47:45 crc kubenswrapper[4183]: I0813 19:47:45.509076 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:46 crc kubenswrapper[4183]: I0813 19:47:46.508744 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:46 crc kubenswrapper[4183]: E0813 19:47:46.812453 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:47:46 crc kubenswrapper[4183]: I0813 19:47:46.969286 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:47:46 crc kubenswrapper[4183]: E0813 19:47:46.975593 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:47 crc kubenswrapper[4183]: E0813 19:47:47.467519 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.509582 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.709930 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713739 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713967 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.713987 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:47 crc kubenswrapper[4183]: I0813 19:47:47.714020 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:47:47 crc kubenswrapper[4183]: E0813 19:47:47.718181 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:47Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:47:48 crc kubenswrapper[4183]: W0813 19:47:48.118499 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:48 crc kubenswrapper[4183]: E0813 19:47:48.118609 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:48 crc kubenswrapper[4183]: I0813 19:47:48.508468 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.209234 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.210976 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.211070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.211093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.212341 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:47:49 crc kubenswrapper[4183]: E0813 19:47:49.212814 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:47:49 crc kubenswrapper[4183]: I0813 19:47:49.507056 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.508037 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.893941 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42490->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894144 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:42490->192.168.126.11:10357: read: connection reset by peer"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894229 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.894387 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896037 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.896164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.898064 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Aug 13 19:47:50 crc kubenswrapper[4183]: I0813 19:47:50.898425 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" gracePeriod=30
Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.022282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:47:51 crc kubenswrapper[4183]: W0813 19:47:51.416612 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.416762 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.509431 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.849917 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.851326 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/4.log"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854231 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" exitCode=255
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854297 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"}
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854357 4183 scope.go:117] "RemoveContainer" containerID="519968a9462e8fe101b32ab89f90f7df5940085d68dc41f9bb8fe6dcd45fe76a"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.854491 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.856167 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:51 crc kubenswrapper[4183]: I0813 19:47:51.857494 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:47:51 crc kubenswrapper[4183]: E0813 19:47:51.859186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:47:52 crc kubenswrapper[4183]: I0813 19:47:52.507851 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:52Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:52 crc kubenswrapper[4183]: I0813 19:47:52.859598 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 19:47:53 crc kubenswrapper[4183]: I0813 19:47:53.508336 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:53 crc kubenswrapper[4183]: W0813 19:47:53.683937 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:53 crc kubenswrapper[4183]: E0813 19:47:53.684046 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:54 crc kubenswrapper[4183]: E0813 19:47:54.472244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.507411 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661003 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661149 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661179 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661211 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.661232 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.719219 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721506 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:47:54 crc kubenswrapper[4183]: I0813 19:47:54.721536 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:47:54 crc kubenswrapper[4183]: E0813 19:47:54.725028 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:54Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:47:55 crc kubenswrapper[4183]: E0813 19:47:55.424009 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:47:55 crc kubenswrapper[4183]: I0813 19:47:55.508465 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:55Z is after 2025-06-26T12:47:18Z Aug 13 19:47:56 crc kubenswrapper[4183]: I0813 19:47:56.509220 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:56Z is after 2025-06-26T12:47:18Z Aug 13 19:47:56 crc kubenswrapper[4183]: E0813 19:47:56.817564 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:56Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:47:57 crc kubenswrapper[4183]: I0813 19:47:57.508461 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:57Z is after 2025-06-26T12:47:18Z Aug 13 19:47:58 crc kubenswrapper[4183]: I0813 19:47:58.508564 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:58Z is after 2025-06-26T12:47:18Z Aug 13 19:47:59 crc kubenswrapper[4183]: I0813 19:47:59.508359 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:59Z is after 2025-06-26T12:47:18Z Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.208959 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.211760 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.507257 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:00Z is after 2025-06-26T12:47:18Z Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.518471 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.518721 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520642 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.520752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:00 crc kubenswrapper[4183]: I0813 19:48:00.522654 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:48:00 crc kubenswrapper[4183]: E0813 19:48:00.523656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:48:01 crc kubenswrapper[4183]: E0813 19:48:01.478172 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.507668 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.725214 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727358 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:01 crc kubenswrapper[4183]: I0813 19:48:01.727437 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:01 crc kubenswrapper[4183]: 
E0813 19:48:01.737482 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:01Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:02 crc kubenswrapper[4183]: I0813 19:48:02.509365 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:02Z is after 2025-06-26T12:47:18Z Aug 13 19:48:03 crc kubenswrapper[4183]: I0813 19:48:03.507856 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:03Z is after 2025-06-26T12:47:18Z Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.208541 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210255 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.210268 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.211905 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:04 crc kubenswrapper[4183]: E0813 
19:48:04.212462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:04 crc kubenswrapper[4183]: I0813 19:48:04.508972 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:04Z is after 2025-06-26T12:47:18Z Aug 13 19:48:05 crc kubenswrapper[4183]: E0813 19:48:05.424977 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:05 crc kubenswrapper[4183]: I0813 19:48:05.510017 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:05Z is after 2025-06-26T12:47:18Z Aug 13 19:48:06 crc kubenswrapper[4183]: I0813 19:48:06.509015 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:06Z is after 2025-06-26T12:47:18Z Aug 13 19:48:06 crc kubenswrapper[4183]: E0813 19:48:06.823687 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:06Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:07 crc kubenswrapper[4183]: I0813 19:48:07.507939 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:07Z is after 2025-06-26T12:47:18Z Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.208284 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210308 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.210406 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:08 crc kubenswrapper[4183]: E0813 19:48:08.482375 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.529238 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.737626 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739132 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739309 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:08 crc kubenswrapper[4183]: I0813 19:48:08.739419 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:08 crc kubenswrapper[4183]: E0813 19:48:08.742847 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:08Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:09 crc kubenswrapper[4183]: I0813 19:48:09.508101 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:09Z is after 2025-06-26T12:47:18Z Aug 13 19:48:10 crc kubenswrapper[4183]: I0813 19:48:10.509171 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:10Z is after 2025-06-26T12:47:18Z Aug 13 19:48:11 crc kubenswrapper[4183]: I0813 19:48:11.507065 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:11Z is after 2025-06-26T12:47:18Z Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.208022 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209637 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209725 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.209746 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.211524 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:48:12 crc kubenswrapper[4183]: E0813 19:48:12.212281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:48:12 crc kubenswrapper[4183]: I0813 19:48:12.508424 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:12Z is after 2025-06-26T12:47:18Z Aug 13 19:48:13 crc kubenswrapper[4183]: I0813 19:48:13.508153 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:13Z is after 2025-06-26T12:47:18Z Aug 13 19:48:14 crc kubenswrapper[4183]: I0813 19:48:14.508084 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z Aug 13 19:48:14 crc kubenswrapper[4183]: W0813 19:48:14.894124 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z Aug 13 19:48:14 crc kubenswrapper[4183]: E0813 19:48:14.894223 4183 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:14Z is after 2025-06-26T12:47:18Z Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.425881 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.486630 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.507913 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.743079 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745850 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745935 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:15 crc kubenswrapper[4183]: I0813 19:48:15.745967 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:15 crc kubenswrapper[4183]: E0813 19:48:15.756009 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:15Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:16 crc kubenswrapper[4183]: I0813 19:48:16.507684 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:16Z is after 2025-06-26T12:47:18Z Aug 13 19:48:16 crc kubenswrapper[4183]: E0813 19:48:16.828651 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:17 crc kubenswrapper[4183]: I0813 19:48:17.508695 
4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:17Z is after 2025-06-26T12:47:18Z Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.208882 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.210322 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.211535 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:48:18 crc kubenswrapper[4183]: E0813 19:48:18.212120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.509491 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:18Z is 
after 2025-06-26T12:47:18Z
Aug 13 19:48:18 crc kubenswrapper[4183]: I0813 19:48:18.969699 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:48:18 crc kubenswrapper[4183]: E0813 19:48:18.974609 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:19 crc kubenswrapper[4183]: I0813 19:48:19.507337 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:20 crc kubenswrapper[4183]: I0813 19:48:20.509878 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:21 crc kubenswrapper[4183]: I0813 19:48:21.507142 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:22 crc kubenswrapper[4183]: E0813 19:48:22.492982 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.509072 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.756562 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758512 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:22 crc kubenswrapper[4183]: I0813 19:48:22.758553 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:22 crc kubenswrapper[4183]: E0813 19:48:22.762269 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:22Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:23 crc kubenswrapper[4183]: I0813 19:48:23.508701 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.208815 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.210373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.216334 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:24 crc kubenswrapper[4183]: E0813 19:48:24.218140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:24 crc kubenswrapper[4183]: I0813 19:48:24.508837 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.208831 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210182 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.210202 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:25 crc kubenswrapper[4183]: E0813 19:48:25.427029 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:25 crc kubenswrapper[4183]: I0813 19:48:25.507028 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:26 crc kubenswrapper[4183]: I0813 19:48:26.509146 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:26 crc kubenswrapper[4183]: E0813 19:48:26.834373 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:26Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:27 crc kubenswrapper[4183]: I0813 19:48:27.508562 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:28 crc kubenswrapper[4183]: W0813 19:48:28.188409 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:28 crc kubenswrapper[4183]: E0813 19:48:28.188557 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:28 crc kubenswrapper[4183]: I0813 19:48:28.507603 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:29 crc kubenswrapper[4183]: E0813 19:48:29.500911 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.511026 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.762589 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764311 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:29 crc kubenswrapper[4183]: I0813 19:48:29.764341 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:29 crc kubenswrapper[4183]: E0813 19:48:29.768854 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:29Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:30 crc kubenswrapper[4183]: I0813 19:48:30.517188 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:30Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.209108 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210953 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.210994 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.212398 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:48:31 crc kubenswrapper[4183]: E0813 19:48:31.212827 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:48:31 crc kubenswrapper[4183]: I0813 19:48:31.507232 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:31Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:32 crc kubenswrapper[4183]: I0813 19:48:32.507707 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:32Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:33 crc kubenswrapper[4183]: I0813 19:48:33.508146 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:33Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:34 crc kubenswrapper[4183]: I0813 19:48:34.507587 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:34Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:35 crc kubenswrapper[4183]: E0813 19:48:35.428027 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:35 crc kubenswrapper[4183]: I0813 19:48:35.507216 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:35Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.505587 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.507713 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: W0813 19:48:36.568675 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.568942 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.769224 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.770923 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.770997 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.771012 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:36 crc kubenswrapper[4183]: I0813 19:48:36.771107 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.778389 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:36 crc kubenswrapper[4183]: E0813 19:48:36.842056 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:36Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.209111 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211424 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211536 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.211552 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.213289 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:37 crc kubenswrapper[4183]: E0813 19:48:37.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:37 crc kubenswrapper[4183]: I0813 19:48:37.507690 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:37Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:38 crc kubenswrapper[4183]: I0813 19:48:38.510445 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:38Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:39 crc kubenswrapper[4183]: I0813 19:48:39.508593 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:39Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:40 crc kubenswrapper[4183]: I0813 19:48:40.509016 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:40Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:41 crc kubenswrapper[4183]: I0813 19:48:41.508595 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:41 crc kubenswrapper[4183]: W0813 19:48:41.776148 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:41 crc kubenswrapper[4183]: E0813 19:48:41.776301 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:41Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.208554 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210237 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.210366 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.212399 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:48:42 crc kubenswrapper[4183]: E0813 19:48:42.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:48:42 crc kubenswrapper[4183]: I0813 19:48:42.509594 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:42Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.508017 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:43 crc kubenswrapper[4183]: E0813 19:48:43.510283 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.779767 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781546 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781625 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:43 crc kubenswrapper[4183]: I0813 19:48:43.781718 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:43 crc kubenswrapper[4183]: E0813 19:48:43.785898 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:43Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:44 crc kubenswrapper[4183]: I0813 19:48:44.508607 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:44Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:45 crc kubenswrapper[4183]: E0813 19:48:45.428864 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:45 crc kubenswrapper[4183]: I0813 19:48:45.508371 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:45Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:46 crc kubenswrapper[4183]: I0813 19:48:46.507471 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:46Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:46 crc kubenswrapper[4183]: E0813 19:48:46.846934 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:46Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:48:47 crc kubenswrapper[4183]: I0813 19:48:47.507610 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:47Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:48 crc kubenswrapper[4183]: I0813 19:48:48.507390 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:48Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:49 crc kubenswrapper[4183]: I0813 19:48:49.508906 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:49 crc kubenswrapper[4183]: W0813 19:48:49.644590 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:49 crc kubenswrapper[4183]: E0813 19:48:49.644687 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:49Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.209026 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210881 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.210991 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.212513 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.213372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.508219 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.516201 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.787053 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.788997 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789071 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789090 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.789120 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.792941 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:48:50 crc kubenswrapper[4183]: I0813 19:48:50.969403 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:48:50 crc kubenswrapper[4183]: E0813 19:48:50.974173 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:50Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:51 crc kubenswrapper[4183]: I0813 19:48:51.507882 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:51Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:52 crc kubenswrapper[4183]: I0813 19:48:52.508568 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:52Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:53 crc kubenswrapper[4183]: I0813 19:48:53.508038 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:53Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:54 crc kubenswrapper[4183]: E0813 19:48:54.270502 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:48:54 crc kubenswrapper[4183]: E0813 19:48:54.288343 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.508609 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:54Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662493 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662615 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662669 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662703 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:48:54 crc kubenswrapper[4183]: I0813 19:48:54.662726 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.208643 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210367 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.210485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.212012 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.212463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.269841 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:48:55 crc kubenswrapper[4183]: E0813 19:48:55.429093 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:48:55 crc kubenswrapper[4183]: I0813 19:48:55.507617 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:55Z is after 2025-06-26T12:47:18Z
Aug 13 19:48:56 crc kubenswrapper[4183]: E0813 19:48:56.269957 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials."
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:56 crc kubenswrapper[4183]: I0813 19:48:56.508306 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:56Z is after 2025-06-26T12:47:18Z Aug 13 19:48:56 crc kubenswrapper[4183]: E0813 19:48:56.851929 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:56Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.270448 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.508358 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.520207 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.793333 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.794972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.795949 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.795969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:48:57 crc kubenswrapper[4183]: I0813 19:48:57.796000 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:48:57 crc kubenswrapper[4183]: E0813 19:48:57.801474 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-08-13T19:48:57Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:48:58 crc kubenswrapper[4183]: E0813 19:48:58.271041 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:58 crc kubenswrapper[4183]: I0813 19:48:58.508150 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:58Z is after 2025-06-26T12:47:18Z Aug 13 19:48:59 crc kubenswrapper[4183]: E0813 19:48:59.270193 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:48:59 crc kubenswrapper[4183]: I0813 19:48:59.507252 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:48:59Z is after 2025-06-26T12:47:18Z Aug 13 19:49:00 crc kubenswrapper[4183]: E0813 19:49:00.270093 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:00 crc kubenswrapper[4183]: I0813 19:49:00.507642 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:00Z is after 2025-06-26T12:47:18Z Aug 13 19:49:01 crc kubenswrapper[4183]: E0813 19:49:01.270053 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:01 crc kubenswrapper[4183]: I0813 19:49:01.507537 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:01Z is after 2025-06-26T12:47:18Z Aug 13 19:49:02 crc kubenswrapper[4183]: E0813 19:49:02.270147 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:02 crc kubenswrapper[4183]: I0813 19:49:02.509575 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:02Z is after 2025-06-26T12:47:18Z Aug 13 19:49:03 crc kubenswrapper[4183]: E0813 19:49:03.270170 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:03 crc kubenswrapper[4183]: I0813 19:49:03.508092 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:03Z is after 2025-06-26T12:47:18Z Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.208680 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210579 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.210693 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.212286 4183 scope.go:117] "RemoveContainer" 
containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.213044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(2eb2b200bca0d10cf0fe16fb7c0caf80)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.270334 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.289000 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.508476 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.525146 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.801907 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803313 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:04 crc kubenswrapper[4183]: I0813 19:49:04.803375 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:04 crc kubenswrapper[4183]: E0813 19:49:04.807056 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-08-13T19:49:04Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:49:05 crc kubenswrapper[4183]: E0813 19:49:05.270106 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:05 crc kubenswrapper[4183]: E0813 19:49:05.430194 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:05 crc kubenswrapper[4183]: I0813 19:49:05.507344 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:05Z is after 2025-06-26T12:47:18Z Aug 13 19:49:06 crc kubenswrapper[4183]: E0813 19:49:06.270028 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:06 crc kubenswrapper[4183]: I0813 19:49:06.507669 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:06Z is after 2025-06-26T12:47:18Z Aug 13 19:49:06 crc kubenswrapper[4183]: E0813 19:49:06.858719 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:06Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:07 crc kubenswrapper[4183]: E0813 19:49:07.270036 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:07 crc kubenswrapper[4183]: I0813 19:49:07.507038 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:07Z is after 2025-06-26T12:47:18Z Aug 13 19:49:08 crc kubenswrapper[4183]: E0813 19:49:08.270104 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:08 crc kubenswrapper[4183]: I0813 19:49:08.509900 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:08Z is after 2025-06-26T12:47:18Z Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.208903 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.211430 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.211701 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.212927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:09 crc kubenswrapper[4183]: E0813 19:49:09.270291 4183 transport.go:123] "No valid client 
certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:09 crc kubenswrapper[4183]: I0813 19:49:09.507217 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:09Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.209179 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210653 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210693 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.210705 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.212199 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 19:49:10.212558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 
19:49:10.270044 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:10 crc kubenswrapper[4183]: I0813 19:49:10.507652 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: W0813 19:49:10.663149 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:10 crc kubenswrapper[4183]: E0813 19:49:10.663336 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:10Z is after 2025-06-26T12:47:18Z Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.270245 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.507705 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.530195 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z" interval="7s" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.808058 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817246 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817337 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:11 crc kubenswrapper[4183]: I0813 19:49:11.817390 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:11 crc kubenswrapper[4183]: E0813 19:49:11.820833 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-08-13T19:49:11Z is after 2025-06-26T12:47:18Z" node="crc" Aug 13 19:49:12 crc kubenswrapper[4183]: E0813 19:49:12.270100 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:12 crc kubenswrapper[4183]: I0813 19:49:12.508425 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:12Z is after 2025-06-26T12:47:18Z Aug 13 19:49:13 crc kubenswrapper[4183]: E0813 19:49:13.270198 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:13 crc kubenswrapper[4183]: I0813 19:49:13.511476 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:13Z is after 2025-06-26T12:47:18Z Aug 13 19:49:14 crc kubenswrapper[4183]: E0813 19:49:14.270548 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:14 crc kubenswrapper[4183]: E0813 19:49:14.289133 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:14 crc kubenswrapper[4183]: I0813 19:49:14.510249 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:14Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.208334 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209861 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209947 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.209964 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.211520 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 19:49:15 crc kubenswrapper[4183]: E0813 19:49:15.270310 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:15 crc kubenswrapper[4183]: E0813 19:49:15.430490 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:15 crc kubenswrapper[4183]: I0813 19:49:15.509289 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:15Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.152473 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.154216 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"}
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.154441 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.155541 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.270299 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:16 crc kubenswrapper[4183]: I0813 19:49:16.509020 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.868279 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.868752 4183 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:16 crc kubenswrapper[4183]: E0813 19:49:16.874032 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:16Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:17 crc kubenswrapper[4183]: E0813 19:49:17.270118 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:17 crc kubenswrapper[4183]: I0813 19:49:17.508401 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:17Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.270308 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.509244 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.536924 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.821885 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823832 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823855 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:18 crc kubenswrapper[4183]: I0813 19:49:18.823894 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:18 crc kubenswrapper[4183]: E0813 19:49:18.828073 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:18Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:49:19 crc kubenswrapper[4183]: E0813 19:49:19.270703 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:19 crc kubenswrapper[4183]: W0813 19:49:19.467073 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:19 crc kubenswrapper[4183]: E0813 19:49:19.467173 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.507924 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:19Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.581401 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.582025 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583832 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:19 crc kubenswrapper[4183]: I0813 19:49:19.583910 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.209155 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210701 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210840 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.210864 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:20 crc kubenswrapper[4183]: E0813 19:49:20.270522 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:20 crc kubenswrapper[4183]: I0813 19:49:20.508890 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:20Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:20 crc kubenswrapper[4183]: E0813 19:49:20.708455 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:20Z is after 2025-06-26T12:47:18Z" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:21 crc kubenswrapper[4183]: E0813 19:49:21.269708 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.508993 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:21Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.558277 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.558456 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561552 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561720 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:21 crc kubenswrapper[4183]: I0813 19:49:21.561906 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.209056 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210374 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210467 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.210483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.211652 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.269829 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.507693 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.582060 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.582412 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:49:22 crc kubenswrapper[4183]: I0813 19:49:22.969453 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:49:22 crc kubenswrapper[4183]: E0813 19:49:22.975151 4183 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:22Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:23 crc kubenswrapper[4183]: E0813 19:49:23.270008 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:23 crc kubenswrapper[4183]: I0813 19:49:23.507942 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:23Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:24 crc kubenswrapper[4183]: E0813 19:49:24.270019 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:24 crc kubenswrapper[4183]: E0813 19:49:24.289754 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:24 crc kubenswrapper[4183]: I0813 19:49:24.507602 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:24Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.270531 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.431533 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.507442 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.540981 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z" interval="7s"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.828733 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830238 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830323 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:25 crc kubenswrapper[4183]: I0813 19:49:25.830347 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:25 crc kubenswrapper[4183]: E0813 19:49:25.834565 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:25Z is after 2025-06-26T12:47:18Z" node="crc"
Aug 13 19:49:26 crc kubenswrapper[4183]: E0813 19:49:26.270401 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:26 crc kubenswrapper[4183]: I0813 19:49:26.507524 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:26Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:27 crc kubenswrapper[4183]: E0813 19:49:27.270871 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:27 crc kubenswrapper[4183]: I0813 19:49:27.508099 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:27Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:28 crc kubenswrapper[4183]: E0813 19:49:28.270537 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:28 crc kubenswrapper[4183]: I0813 19:49:28.507909 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:28Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:29 crc kubenswrapper[4183]: E0813 19:49:29.270255 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:29 crc kubenswrapper[4183]: I0813 19:49:29.507404 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:49:29Z is after 2025-06-26T12:47:18Z
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.270553 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:30 crc kubenswrapper[4183]: I0813 19:49:30.509893 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.715971 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.723222 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:54.85870034 +0000 UTC m=+1.551365198,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.729334 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:54.8587333 +0000 UTC m=+1.551398038,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.735411 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:54.85874733 +0000 UTC m=+1.551411958,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.744178 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.024230731 +0000 UTC m=+1.716895459,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.748454 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.024667024 +0000 UTC m=+1.717331842,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.751936 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.024686724 +0000 UTC m=+1.717351492,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.756567 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b190ee1238d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.158930317 +0000 UTC m=+1.851595035,LastTimestamp:2025-08-13 19:43:55.158930317 +0000 UTC m=+1.851595035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.761713 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.317392991 +0000 UTC m=+2.010058039,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.767268 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.317419641 +0000 UTC m=+2.010084449,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.773494 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.317434591 +0000 UTC m=+2.010099389,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.780170 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.329246191 +0000 UTC m=+2.021910959,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.788362 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.329270591 +0000 UTC m=+2.021935419,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.794122 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.32928957 +0000 UTC m=+2.021954188,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.799561 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC
m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.32933991 +0000 UTC m=+2.022004657,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.804277 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.329369089 +0000 UTC m=+2.022033867,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.809238 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.329383399 +0000 UTC m=+2.022048027,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.814081 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.332498119 +0000 UTC m=+2.025162897,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.819425 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.332519098 +0000 UTC m=+2.025183846,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.824567 4183 
event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.332533998 +0000 UTC m=+2.025198706,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.829662 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80503ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80503ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775405549 +0000 UTC m=+1.468070387,LastTimestamp:2025-08-13 19:43:55.334421288 +0000 UTC m=+2.027086076,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.834495 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80a72b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80a72b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775761593 +0000 UTC m=+1.468426371,LastTimestamp:2025-08-13 19:43:55.334438458 +0000 UTC m=+2.027103186,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.839365 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"crc.185b6b18f80c55b9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.185b6b18f80c55b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:54.775885241 +0000 UTC m=+1.468550049,LastTimestamp:2025-08-13 19:43:55.334449487 +0000 UTC m=+2.027114225,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.845902 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1934520c58 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787086936 +0000 UTC m=+2.479751734,LastTimestamp:2025-08-13 19:43:55.787086936 +0000 UTC m=+2.479751734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.851094 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b193452335e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787096926 +0000 UTC m=+2.479761664,LastTimestamp:2025-08-13 19:43:55.787096926 +0000 UTC m=+2.479761664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.858497 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b193454f3a7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.787277223 +0000 UTC m=+2.479942161,LastTimestamp:2025-08-13 19:43:55.787277223 +0000 UTC m=+2.479942161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.863370 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1934c22012 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.794432018 +0000 UTC 
m=+2.487096756,LastTimestamp:2025-08-13 19:43:55.794432018 +0000 UTC m=+2.487096756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.868318 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1935677efa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:55.805269754 +0000 UTC m=+2.497934402,LastTimestamp:2025-08-13 19:43:55.805269754 +0000 UTC m=+2.497934402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.873439 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b199886db6b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.468269419 +0000 UTC m=+4.160934207,LastTimestamp:2025-08-13 19:43:57.468269419 +0000 UTC m=+4.160934207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.878613 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1998dd30be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.473927358 +0000 UTC m=+4.166592086,LastTimestamp:2025-08-13 19:43:57.473927358 +0000 UTC m=+4.166592086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.883898 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b19999cbe50 
openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.486480976 +0000 UTC m=+4.179145604,LastTimestamp:2025-08-13 19:43:57.486480976 +0000 UTC m=+4.179145604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.889369 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1999c204e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.488923877 +0000 UTC m=+4.181588535,LastTimestamp:2025-08-13 19:43:57.488923877 +0000 UTC m=+4.181588535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.895540 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b199b54a9df openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.515311583 +0000 UTC m=+4.207976331,LastTimestamp:2025-08-13 19:43:57.515311583 +0000 UTC m=+4.207976331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.900880 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b199e67d773 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.566900083 +0000 UTC m=+4.259564721,LastTimestamp:2025-08-13 19:43:57.566900083 +0000 UTC m=+4.259564721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.906976 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b199f3a8cc6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.580709062 +0000 UTC m=+4.273373930,LastTimestamp:2025-08-13 19:43:57.580709062 +0000 UTC m=+4.273373930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.915324 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b199fe9c443 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.592192067 +0000 UTC m=+4.284856765,LastTimestamp:2025-08-13 19:43:57.592192067 +0000 UTC m=+4.284856765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.923950 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19a0082eef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,LastTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.929030 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b19a2a80e70 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.638217328 +0000 UTC m=+4.330882056,LastTimestamp:2025-08-13 19:43:57.638217328 +0000 UTC m=+4.330882056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.935053 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19b35fe1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,LastTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.940670 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba50d163 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 
19:43:58.035153251 +0000 UTC m=+4.727818009,LastTimestamp:2025-08-13 19:43:58.035153251 +0000 UTC m=+4.727818009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.946372 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba6c9dae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.036975022 +0000 UTC m=+4.729639900,LastTimestamp:2025-08-13 19:43:58.036975022 +0000 UTC m=+4.729639900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.953195 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b19c16e2579 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.154515833 +0000 UTC m=+4.847180581,LastTimestamp:2025-08-13 19:43:58.154515833 +0000 UTC m=+4.847180581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.958937 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b19c770630c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.255325964 +0000 UTC m=+4.947990712,LastTimestamp:2025-08-13 19:43:58.255325964 +0000 UTC m=+4.947990712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.965183 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b19c89e5cea openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.275116266 +0000 UTC m=+4.967781174,LastTimestamp:2025-08-13 19:43:58.275116266 +0000 UTC m=+4.967781174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.971982 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b19c998e3fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.291534842 +0000 UTC m=+4.984199570,LastTimestamp:2025-08-13 19:43:58.291534842 +0000 UTC m=+4.984199570,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.978918 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b19cb0fb052 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.316097618 +0000 UTC m=+5.008762296,LastTimestamp:2025-08-13 19:43:58.316097618 +0000 UTC m=+5.008762296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.984856 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19e5fef6de openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.767986398 +0000 UTC m=+5.460651056,LastTimestamp:2025-08-13 19:43:58.767986398 +0000 UTC m=+5.460651056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.989255 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19fc142bc3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.138474947 +0000 UTC m=+5.831139825,LastTimestamp:2025-08-13 19:43:59.138474947 +0000 UTC m=+5.831139825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.994025 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19fc3be3f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.141078005 +0000 UTC m=+5.833742753,LastTimestamp:2025-08-13 19:43:59.141078005 +0000 UTC m=+5.833742753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:30 crc kubenswrapper[4183]: E0813 19:49:30.999221 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a20af9846 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.752640582 +0000 UTC m=+6.445305220,LastTimestamp:2025-08-13 19:43:59.752640582 +0000 UTC m=+6.445305220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.005263 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a2538788f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:59.828719759 +0000 UTC m=+6.521384507,LastTimestamp:2025-08-13 19:43:59.828719759 +0000 UTC m=+6.521384507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.010708 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1a33bbabba openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.072199098 +0000 UTC m=+6.764864006,LastTimestamp:2025-08-13 19:44:00.072199098 +0000 UTC m=+6.764864006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.017311 4183 
event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a352c73be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.09636755 +0000 UTC m=+6.789032298,LastTimestamp:2025-08-13 19:44:00.09636755 +0000 UTC m=+6.789032298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.022588 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a36add0c5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.121622725 +0000 UTC m=+6.814287623,LastTimestamp:2025-08-13 19:44:00.121622725 +0000 UTC m=+6.814287623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.027421 4183 
event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a36e70dda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.125373914 +0000 UTC m=+6.818038642,LastTimestamp:2025-08-13 19:44:00.125373914 +0000 UTC m=+6.818038642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.032735 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1a38f39204 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.159748612 +0000 UTC 
m=+6.852413400,LastTimestamp:2025-08-13 19:44:00.159748612 +0000 UTC m=+6.852413400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.038190 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a3973e4ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.168158447 +0000 UTC m=+6.860823165,LastTimestamp:2025-08-13 19:44:00.168158447 +0000 UTC m=+6.860823165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.054452 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a39869685 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.169383557 +0000 UTC m=+6.862048295,LastTimestamp:2025-08-13 19:44:00.169383557 +0000 UTC m=+6.862048295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.060585 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.185b6b1a3d2acd3c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d3ae206906481b4831fd849b559269c8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.230477116 +0000 UTC m=+6.923141744,LastTimestamp:2025-08-13 19:44:00.230477116 +0000 UTC m=+6.923141744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.066507 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1a3dbdce11 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.240111121 +0000 UTC m=+6.932775859,LastTimestamp:2025-08-13 19:44:00.240111121 +0000 UTC m=+6.932775859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.072140 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a4f719cb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:00.53710764 +0000 UTC m=+7.229772348,LastTimestamp:2025-08-13 19:44:00.53710764 +0000 UTC m=+7.229772348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.078285 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a7478fb6e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.15834763 +0000 UTC m=+7.851012988,LastTimestamp:2025-08-13 19:44:01.15834763 +0000 UTC m=+7.851012988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.089502 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a749b2daa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.160588714 +0000 UTC m=+7.853253362,LastTimestamp:2025-08-13 19:44:01.160588714 +0000 UTC m=+7.853253362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.096173 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a898817aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.511659434 +0000 UTC m=+8.204324172,LastTimestamp:2025-08-13 19:44:01.511659434 +0000 UTC m=+8.204324172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.102249 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1a8a37d37f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.523176319 +0000 UTC m=+8.215840947,LastTimestamp:2025-08-13 19:44:01.523176319 +0000 UTC m=+8.215840947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.108244 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a8bfdc49b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.552925851 +0000 UTC m=+8.245590579,LastTimestamp:2025-08-13 19:44:01.552925851 +0000 UTC m=+8.245590579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.115351 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1a8c18b55e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.554691422 +0000 UTC m=+8.247356050,LastTimestamp:2025-08-13 19:44:01.554691422 +0000 UTC m=+8.247356050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.121877 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1a8c2871a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:01.555722656 +0000 UTC m=+8.248387694,LastTimestamp:2025-08-13 19:44:01.555722656 +0000 UTC m=+8.248387694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.129240 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1ae43f56b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.033618096 +0000 UTC m=+9.726282814,LastTimestamp:2025-08-13 19:44:03.033618096 +0000 UTC m=+9.726282814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.135255 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1ae71d62bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.081724603 +0000 UTC m=+9.774389431,LastTimestamp:2025-08-13 19:44:03.081724603 +0000 UTC m=+9.774389431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.142020 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1aeee82a72 openshift-kube-apiserver 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.212454514 +0000 UTC m=+9.905119352,LastTimestamp:2025-08-13 19:44:03.212454514 +0000 UTC m=+9.905119352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.147335 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1aefb94b8e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.226160014 +0000 UTC m=+9.918824642,LastTimestamp:2025-08-13 19:44:03.226160014 +0000 UTC m=+9.918824642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.153455 4183 event.go:346] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1af0961313 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.240629011 +0000 UTC m=+9.933296709,LastTimestamp:2025-08-13 19:44:03.240629011 +0000 UTC m=+9.933296709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.159561 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1af3f4aa7b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.297159803 +0000 UTC m=+9.989824671,LastTimestamp:2025-08-13 19:44:03.297159803 +0000 UTC m=+9.989824671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 
19:49:31.165738 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b08a0a410 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.643974672 +0000 UTC m=+10.336639400,LastTimestamp:2025-08-13 19:44:03.643974672 +0000 UTC m=+10.336639400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.172647 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.185b6b1b09844dfa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:631cdb37fbb54e809ecc5e719aebd371,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:03.658894842 +0000 UTC m=+10.351559570,LastTimestamp:2025-08-13 19:44:03.658894842 +0000 UTC 
m=+10.351559570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.179930 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4a743788 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.74835956 +0000 UTC m=+11.441025118,LastTimestamp:2025-08-13 19:44:04.74835956 +0000 UTC m=+11.441025118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.181609 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b4a769be8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.748516328 +0000 UTC m=+11.441181476,LastTimestamp:2025-08-13 19:44:04.748516328 +0000 UTC 
m=+11.441181476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.188466 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4f78ce68 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.832546408 +0000 UTC m=+11.525211506,LastTimestamp:2025-08-13 19:44:04.832546408 +0000 UTC m=+11.525211506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.193493 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b4f9e7370 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.835013488 +0000 UTC 
m=+11.527678176,LastTimestamp:2025-08-13 19:44:04.835013488 +0000 UTC m=+11.527678176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.198940 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b5384199a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.900395418 +0000 UTC m=+11.593060046,LastTimestamp:2025-08-13 19:44:04.900395418 +0000 UTC m=+11.593060046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.205056 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b53c35bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,LastTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.211243 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b891abecf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.799460559 +0000 UTC m=+12.492125337,LastTimestamp:2025-08-13 19:44:05.799460559 +0000 UTC m=+12.492125337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.216698 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b89221cd6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.799943382 +0000 UTC m=+12.492608170,LastTimestamp:2025-08-13 19:44:05.799943382 +0000 UTC m=+12.492608170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.222906 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b8d621d7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.871246714 +0000 UTC m=+12.563911562,LastTimestamp:2025-08-13 19:44:05.871246714 +0000 UTC m=+12.563911562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.228245 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b9004b8dd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.915457757 +0000 UTC m=+12.608122415,LastTimestamp:2025-08-13 19:44:05.915457757 +0000 UTC m=+12.608122415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.233893 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1b9025a162 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:05.917614434 +0000 UTC m=+12.610279142,LastTimestamp:2025-08-13 19:44:05.917614434 +0000 UTC m=+12.610279142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.239673 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1bdc2e4fe5 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.193251813 +0000 UTC m=+13.885916601,LastTimestamp:2025-08-13 19:44:07.193251813 +0000 UTC m=+13.885916601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.244436 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1be6038a15 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.358220821 +0000 UTC m=+14.050885539,LastTimestamp:2025-08-13 19:44:07.358220821 +0000 UTC m=+14.050885539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.250241 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1be637912f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:07.361630511 +0000 UTC m=+14.054295269,LastTimestamp:2025-08-13 19:44:07.361630511 +0000 UTC m=+14.054295269,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.256487 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1c0fd99e9b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:08.060116635 +0000 UTC m=+14.752781353,LastTimestamp:2025-08-13 19:44:08.060116635 +0000 UTC m=+14.752781353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.261845 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.185b6b1c1834ac80 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:b2a6a3b2ca08062d24afa4c01aaf9e4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:08.200301696 +0000 UTC m=+14.892966424,LastTimestamp:2025-08-13 19:44:08.200301696 +0000 UTC m=+14.892966424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.268266 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 
19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.270099 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.273193 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.279406 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f1d51d0e2 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/healthz": context deadline exceeded Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:21.170999522 +0000 UTC m=+27.863664511,LastTimestamp:2025-08-13 19:44:21.170999522 +0000 UTC m=+27.863664511,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.285865 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f1d52c4f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:21.171062004 +0000 UTC m=+27.863726712,LastTimestamp:2025-08-13 19:44:21.171062004 +0000 UTC m=+27.863726712,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.291293 4183 
event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6837ed20 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.427594016 +0000 UTC m=+29.120259044,LastTimestamp:2025-08-13 19:44:22.427594016 +0000 UTC m=+29.120259044,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.296244 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6838c787 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": 
read tcp 192.168.126.11:44570->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.427649927 +0000 UTC m=+29.120314995,LastTimestamp:2025-08-13 19:44:22.427649927 +0000 UTC m=+29.120314995,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.300958 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6ea889af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Aug 13 19:49:31 crc kubenswrapper[4183]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Aug 13 19:49:31 crc kubenswrapper[4183]: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.535637423 +0000 UTC m=+29.228302151,LastTimestamp:2025-08-13 19:44:22.535637423 +0000 UTC m=+29.228302151,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.305822 4183 event.go:346] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1f6eaa6926 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:22.535760166 +0000 UTC m=+29.228424934,LastTimestamp:2025-08-13 19:44:22.535760166 +0000 UTC m=+29.228424934,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.311049 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:22.581770237 +0000 UTC m=+29.274586219,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.315857 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d63bae5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:22.582142917 +0000 UTC m=+29.274807915,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.321366 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.185b6b1b53c35bb7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.185b6b1b53c35bb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:53c1db1508241fbac1bedf9130341ffe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:04.904541111 +0000 UTC m=+11.597206259,LastTimestamp:2025-08-13 19:44:22.890986821 +0000 UTC m=+29.583651619,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.328168 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b21364a25ab openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.179861931 +0000 UTC 
m=+36.872527479,LastTimestamp:2025-08-13 19:44:30.179861931 +0000 UTC m=+36.872527479,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.333579 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b21364b662f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:58646->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.179943983 +0000 UTC m=+36.872609101,LastTimestamp:2025-08-13 19:44:30.179943983 +0000 UTC m=+36.872609101,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.338449 4183 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b2136ee1b84 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:30.190607236 +0000 UTC m=+36.883273024,LastTimestamp:2025-08-13 19:44:30.190607236 +0000 UTC m=+36.883273024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.343715 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19a0082eef\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19a0082eef openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.594185455 +0000 UTC m=+4.286850313,LastTimestamp:2025-08-13 19:44:30.265237637 +0000 UTC m=+36.957902255,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.349819 4183 
event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19b35fe1a6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19b35fe1a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:57.918699942 +0000 UTC m=+4.611364680,LastTimestamp:2025-08-13 19:44:30.560420379 +0000 UTC m=+37.253085177,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.354916 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b19ba50d163\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b19ba50d163 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:43:58.035153251 +0000 UTC 
m=+4.727818009,LastTimestamp:2025-08-13 19:44:30.600329758 +0000 UTC m=+37.292994536,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.361362 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:42.58231867 +0000 UTC m=+49.274983458,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.368279 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d63bae5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d63bae5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582238949 +0000 UTC m=+19.274903587,LastTimestamp:2025-08-13 19:44:42.583111371 +0000 UTC m=+49.275776039,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 19:49:31 crc kubenswrapper[4183]: E0813 19:49:31.377404 4183 event.go:346] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.185b6b1d1d6149ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Aug 13 19:49:31 crc kubenswrapper[4183]: &Event{ObjectMeta:{kube-controller-manager-crc.185b6b1d1d6149ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:2eb2b200bca0d10cf0fe16fb7c0caf80,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Aug 13 19:49:31 crc kubenswrapper[4183]: body: Aug 13 19:49:31 crc kubenswrapper[4183]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:44:12.582078975 +0000 UTC m=+19.274743833,LastTimestamp:2025-08-13 19:44:52.581706322 +0000 UTC m=+59.274371120,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Aug 13 19:49:31 crc kubenswrapper[4183]: > Aug 13 19:49:31 crc kubenswrapper[4183]: I0813 19:49:31.512119 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.209040 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.210739 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.210975 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.211129 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.270105 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.510493 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.547606 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.581299 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.581414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.835071 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836842 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:32 crc kubenswrapper[4183]: 
I0813 19:49:32.836921 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836946 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:32 crc kubenswrapper[4183]: I0813 19:49:32.836979 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:32 crc kubenswrapper[4183]: E0813 19:49:32.842913 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.208703 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209984 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.209999 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.211385 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:49:33 crc kubenswrapper[4183]: E0813 19:49:33.213154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:49:33 crc kubenswrapper[4183]: E0813 19:49:33.270083 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:33 crc kubenswrapper[4183]: I0813 19:49:33.513266 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:34 crc kubenswrapper[4183]: E0813 19:49:34.270501 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:34 crc kubenswrapper[4183]: E0813 19:49:34.290122 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:34 crc kubenswrapper[4183]: I0813 19:49:34.511654 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:35 crc kubenswrapper[4183]: E0813 19:49:35.269914 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:35 crc kubenswrapper[4183]: E0813 19:49:35.432366 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:35 crc kubenswrapper[4183]: I0813 19:49:35.509201 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:36 crc kubenswrapper[4183]: E0813 19:49:36.270729 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:36 crc kubenswrapper[4183]: I0813 19:49:36.510235 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: E0813 19:49:37.270369 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:37 crc kubenswrapper[4183]: I0813 19:49:37.511214 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: W0813 19:49:37.988112 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:37 crc kubenswrapper[4183]: E0813 19:49:37.988181 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:38 crc kubenswrapper[4183]: E0813 19:49:38.270227 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:38 crc kubenswrapper[4183]: I0813 19:49:38.516757 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.270570 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.509832 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.555643 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.587743 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.588049 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589501 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.589547 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.594720 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.843881 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller 
attach/detach" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845419 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845608 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:39 crc kubenswrapper[4183]: I0813 19:49:39.845971 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc" Aug 13 19:49:39 crc kubenswrapper[4183]: E0813 19:49:39.853543 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.219245 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220210 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220264 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.220283 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:49:40 crc kubenswrapper[4183]: E0813 19:49:40.270720 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:40 crc kubenswrapper[4183]: I0813 19:49:40.513496 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: E0813 19:49:41.270528 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:41 crc kubenswrapper[4183]: I0813 19:49:41.511039 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: W0813 19:49:41.624712 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Aug 13 19:49:41 crc kubenswrapper[4183]: E0813 19:49:41.624885 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Aug 13 19:49:42 crc kubenswrapper[4183]: E0813 19:49:42.270521 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:42 crc kubenswrapper[4183]: I0813 19:49:42.510642 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:43 crc kubenswrapper[4183]: E0813 19:49:43.270599 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:43 crc kubenswrapper[4183]: I0813 19:49:43.510273 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:44 crc kubenswrapper[4183]: E0813 19:49:44.270172 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:44 crc kubenswrapper[4183]: E0813 19:49:44.291062 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:44 crc kubenswrapper[4183]: I0813 19:49:44.510192 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:45 crc kubenswrapper[4183]: E0813 19:49:45.270530 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s" Aug 13 19:49:45 crc kubenswrapper[4183]: E0813 19:49:45.432637 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:49:45 crc kubenswrapper[4183]: I0813 19:49:45.518078 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.270379 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." 
lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.509589 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.562571 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.854766 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856664 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856820 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:46 crc kubenswrapper[4183]: I0813 19:49:46.856861 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:46 crc kubenswrapper[4183]: E0813 19:49:46.862298 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.208883 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210220 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210505 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.210528 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.211829 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:49:47 crc kubenswrapper[4183]: E0813 19:49:47.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe"
Aug 13 19:49:47 crc kubenswrapper[4183]: E0813 19:49:47.270192 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:47 crc kubenswrapper[4183]: I0813 19:49:47.509999 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:48 crc kubenswrapper[4183]: E0813 19:49:48.270137 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:48 crc kubenswrapper[4183]: I0813 19:49:48.510012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:49 crc kubenswrapper[4183]: E0813 19:49:49.270426 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:49 crc kubenswrapper[4183]: I0813 19:49:49.515265 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:50 crc kubenswrapper[4183]: E0813 19:49:50.271060 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:50 crc kubenswrapper[4183]: I0813 19:49:50.511214 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:51 crc kubenswrapper[4183]: W0813 19:49:51.139011 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Aug 13 19:49:51 crc kubenswrapper[4183]: E0813 19:49:51.139082 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Aug 13 19:49:51 crc kubenswrapper[4183]: E0813 19:49:51.270920 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:51 crc kubenswrapper[4183]: I0813 19:49:51.512307 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:52 crc kubenswrapper[4183]: E0813 19:49:52.270037 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:52 crc kubenswrapper[4183]: I0813 19:49:52.510453 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.269932 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.510636 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.569575 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.862843 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864628 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864650 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:49:53 crc kubenswrapper[4183]: I0813 19:49:53.864681 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:49:53 crc kubenswrapper[4183]: E0813 19:49:53.870484 4183 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Aug 13 19:49:54 crc kubenswrapper[4183]: E0813 19:49:54.269682 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:54 crc kubenswrapper[4183]: E0813 19:49:54.291339 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.512971 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.663943 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664078 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664111 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664141 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.664185 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.969279 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Aug 13 19:49:54 crc kubenswrapper[4183]: I0813 19:49:54.989173 4183 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 19:49:55 crc kubenswrapper[4183]: E0813 19:49:55.270095 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:55 crc kubenswrapper[4183]: E0813 19:49:55.433830 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:49:55 crc kubenswrapper[4183]: I0813 19:49:55.510264 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:56 crc kubenswrapper[4183]: E0813 19:49:56.270142 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:56 crc kubenswrapper[4183]: I0813 19:49:56.506012 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:57 crc kubenswrapper[4183]: E0813 19:49:57.269926 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:57 crc kubenswrapper[4183]: I0813 19:49:57.541656 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:57 crc kubenswrapper[4183]: W0813 19:49:57.811287 4183 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 13 19:49:57 crc kubenswrapper[4183]: E0813 19:49:57.811355 4183 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Aug 13 19:49:58 crc kubenswrapper[4183]: E0813 19:49:58.271088 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:58 crc kubenswrapper[4183]: I0813 19:49:58.513900 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:59 crc kubenswrapper[4183]: E0813 19:49:59.269943 4183 transport.go:123] "No valid client certificate is found but the server is not responsive. A restart may be necessary to retrieve new initial credentials." lastCertificateAvailabilityTime="2025-08-13 19:43:54.268863766 +0000 UTC m=+0.961528864" shutdownThreshold="5m0s"
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.519147 4183 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.759430 4183 csr.go:261] certificate signing request csr-lhhqv is approved, waiting to be issued
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.783983 4183 csr.go:257] certificate signing request csr-lhhqv is issued
Aug 13 19:49:59 crc kubenswrapper[4183]: I0813 19:49:59.877575 4183 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.270621 4183 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.785669 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-03-25 02:29:24.474296861 +0000 UTC
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.786022 4183 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 5358h39m23.688281563s for next certificate rotation
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.870735 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:00 crc kubenswrapper[4183]: I0813 19:50:00.875534 4183 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.042192 4183 kubelet_node_status.go:116] "Node was previously registered" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.042571 4183 kubelet_node_status.go:80] "Successfully registered node" node="crc"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047273 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047388 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047410 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.047664 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.081841 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089845 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089866 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089888 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.089919 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.111413 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122042 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122222 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122252 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.122285 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.138858 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.149109 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.149201 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.149228 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.149255 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.149326 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.167689 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192458 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192483 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192513 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.192549 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:01Z","lastTransitionTime":"2025-08-13T19:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205447 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205512 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.205543 4183 kubelet_node_status.go:512] "Error getting the current node from lister" err="node \"crc\" not found" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.208655 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210144 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210216 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.210234 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:01 crc kubenswrapper[4183]: I0813 19:50:01.211710 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.212117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(53c1db1508241fbac1bedf9130341ffe)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" Aug 13 19:50:01 crc kubenswrapper[4183]: E0813 19:50:01.305759 4183 kubelet_node_status.go:506] "Node not becoming ready in time after startup" Aug 13 19:50:05 crc kubenswrapper[4183]: E0813 19:50:05.313867 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:05 crc kubenswrapper[4183]: E0813 19:50:05.434581 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:50:10 crc kubenswrapper[4183]: E0813 19:50:10.316000 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:10 crc kubenswrapper[4183]: I0813 19:50:10.885620 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212015 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212090 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212107 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212125 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.212160 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.223490 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228330 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228348 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228367 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.228396 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.239346 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244689 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.244966 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.245102 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.257632 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263600 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263666 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.263741 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.275510 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281195 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281599 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:11 crc kubenswrapper[4183]: I0813 19:50:11.281625 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:11Z","lastTransitionTime":"2025-08-13T19:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.294314 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:11 crc kubenswrapper[4183]: E0813 19:50:11.294375 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.208952 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210507 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.210736 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:13 crc kubenswrapper[4183]: I0813 19:50:13.212190 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.208746 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211445 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.211539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.333580 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.337214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"53c1db1508241fbac1bedf9130341ffe","Type":"ContainerStarted","Data":"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"}
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.337372 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338495 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:14 crc kubenswrapper[4183]: I0813 19:50:14.338517 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:15 crc kubenswrapper[4183]: E0813 19:50:15.318135 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:15 crc kubenswrapper[4183]: E0813 19:50:15.435056 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.564190 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.564518 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566636 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:17 crc kubenswrapper[4183]: I0813 19:50:17.566657 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:20 crc kubenswrapper[4183]: E0813 19:50:20.321676 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421167 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421256 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421273 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421303 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.421367 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.613232 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621172 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621647 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621849 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.621979 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.635751 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641422 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641531 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.641904 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.655538 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661330 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661382 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661451 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661876 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.661905 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.675383 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681015 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681072 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681105 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.681127 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:21Z","lastTransitionTime":"2025-08-13T19:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.695490 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:21 crc kubenswrapper[4183]: E0813 19:50:21.695561 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:21 crc kubenswrapper[4183]: I0813 19:50:21.992377 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:25 crc kubenswrapper[4183]: E0813 19:50:25.324011 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:25 crc kubenswrapper[4183]: E0813 19:50:25.436171 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.570672 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.571207 4183 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573151 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:27 crc kubenswrapper[4183]: I0813 19:50:27.573342 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:29 crc kubenswrapper[4183]: I0813 19:50:29.245026 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:30 crc kubenswrapper[4183]: E0813 19:50:30.326466 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814172 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814253 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.814288 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.829151 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835394 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835413 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.835434 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.847619 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.853860 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854067 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854270 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.854367 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.868884 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877119 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877197 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877216 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.877280 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.891400 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896583 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896662 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:31 crc kubenswrapper[4183]: I0813 19:50:31.896724 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:31Z","lastTransitionTime":"2025-08-13T19:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.909018 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:31 crc kubenswrapper[4183]: E0813 19:50:31.909106 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:35 crc kubenswrapper[4183]: E0813 19:50:35.328375 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:50:35 crc kubenswrapper[4183]: E0813 19:50:35.437419 4183 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.517928 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.756603 4183 apiserver.go:52] "Watching apiserver" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.776022 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778291 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7","openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw","openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7","openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-zpnhg","openshift-marketplace/certified-operators-7287f","openshift-network-node-identity/network-node-identity-7xghp","openshift-network-operator/network-operator-767c585db5-zd56b","openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh","openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b","openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb","openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz","openshift-etcd-operator/etcd-operator-768d5b5d86-722mg","openshift-ingress/router-default-5c9bf7bc58-6jctv","openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh","openshift-machine-config-operator/machine
-config-operator-76788bff89-wkjgm","openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m","openshift-authentication/oauth-openshift-765b47f944-n2lhl","openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z","openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-apiserver/apiserver-67cbf64bc9-mtx25","openshift-machine-config-operator/machine-config-server-v65wr","openshift-marketplace/redhat-operators-f4jkp","openshift-dns/dns-default-gbw49","openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd","openshift-dns-operator/dns-operator-75f687757b-nz2xb","openshift-image-registry/image-registry-585546dd8b-v5m4t","openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv","openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc","openshift-multus/multus-admission-controller-6c7c885997-4hbbc","openshift-multus/network-metrics-daemon-qdfr4","openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf","openshift-ovn-kubernetes/ovnkube-node-44qcg","openshift-kube-controller-manager/revision-pruner-8-crc","openshift-image-registry/node-ca-l92hr","openshift-network-operator/iptables-alerter-wwpnd","openshift-service-ca/service-ca-666f99b6f-vlbxv","openshift-console/console-84fccc7b6-mkncc","openshift-controller-manager/controller-manager-6ff78978b4-q4vv8","openshift-marketplace/community-operators-8jhz6","hostpath-provisioner/csi-hostpathplugin-hvm8g","openshift-console/downloads-65476884b9-9wcvx","openshift-marketplace/marketplace-operator-8b455464d-f9xdt","openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5","openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz","openshift-console-operator/console-conversion-webhook-595f9969b-l6z49","openshift-dns/node-resolver-dn27q","openshift-ingress-canary/ingress-canary-2vhcn","openshift-kube-apiserver-operator/kube-apiserver-ope
rator-78d54458c4-sc8h7","openshift-multus/multus-additional-cni-plugins-bzj2p","openshift-multus/multus-q88th","openshift-network-diagnostics/network-check-target-v54bt","openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9","openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg","openshift-etcd/etcd-crc","openshift-marketplace/redhat-marketplace-8s8pc","openshift-marketplace/redhat-marketplace-rmwfn","openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr","openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2","openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778476 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" podNamespace="openshift-etcd-operator" podName="etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778870 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.778954 4183 topology_manager.go:215] "Topology Admit Handler" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" podNamespace="openshift-machine-config-operator" podName="machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779016 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" podNamespace="openshift-service-ca-operator" podName="service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779108 4183 
topology_manager.go:215] "Topology Admit Handler" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" podNamespace="openshift-operator-lifecycle-manager" podName="catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779177 4183 topology_manager.go:215] "Topology Admit Handler" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" podNamespace="openshift-operator-lifecycle-manager" podName="package-server-manager-84d578d794-jw7r2" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779239 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" podNamespace="openshift-kube-apiserver-operator" podName="kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779463 4183 topology_manager.go:215] "Topology Admit Handler" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" podNamespace="openshift-machine-api" podName="machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.779873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.779920 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.780352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780375 4183 topology_manager.go:215] "Topology Admit Handler" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" podNamespace="openshift-network-operator" podName="network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780584 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" podNamespace="openshift-operator-lifecycle-manager" podName="olm-operator-6d8474f75f-x54mh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.780935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.781417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781191 4183 topology_manager.go:215] "Topology Admit Handler" podUID="71af81a9-7d43-49b2-9287-c375900aa905" podNamespace="openshift-kube-scheduler-operator" podName="openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.781258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.781953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782210 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" podNamespace="openshift-kube-controller-manager-operator" podName="kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.782757 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" podNamespace="openshift-kube-storage-version-migrator-operator" podName="kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783099 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.783203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783420 4183 topology_manager.go:215] "Topology Admit Handler" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" podNamespace="openshift-machine-api" podName="control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.783663 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" podNamespace="openshift-authentication-operator" podName="authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.784462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.784737 4183 topology_manager.go:215] "Topology Admit Handler" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" podNamespace="openshift-config-operator" podName="openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785160 4183 topology_manager.go:215] "Topology Admit Handler" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" podNamespace="openshift-apiserver-operator" podName="openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785318 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.785639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.785336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.785713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786231 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786269 4183 topology_manager.go:215] "Topology Admit Handler" podUID="10603adc-d495-423c-9459-4caa405960bb" podNamespace="openshift-dns-operator" podName="dns-operator-75f687757b-nz2xb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786671 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.786749 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" podNamespace="openshift-controller-manager-operator" podName="openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787193 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" podNamespace="openshift-image-registry" podName="cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787479 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.787564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787367 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" podNamespace="openshift-multus" podName="multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787875 4183 topology_manager.go:215] "Topology Admit Handler" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" podNamespace="openshift-multus" podName="multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788101 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" podNamespace="openshift-multus" podName="network-metrics-daemon-qdfr4" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788490 4183 topology_manager.go:215] "Topology Admit Handler" podUID="410cf605-1970-4691-9c95-53fdc123b1f3" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.788736 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" podNamespace="openshift-network-diagnostics" podName="network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.790616 4183 topology_manager.go:215] "Topology Admit Handler" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" 
podNamespace="openshift-network-diagnostics" podName="network-check-target-v54bt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.791383 4183 topology_manager.go:215] "Topology Admit Handler" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" podNamespace="openshift-network-node-identity" podName="network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792215 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.792420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.798866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.793065 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.799077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.787459 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.793268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.799527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794364 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2b6d14a5-ca00-40c7-af7a-051a98a24eed" podNamespace="openshift-network-operator" podName="iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.800281 4183 topology_manager.go:215] "Topology Admit Handler" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" podNamespace="openshift-kube-storage-version-migrator" podName="migrator-f7c6d88df-q2fnv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794555 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.794704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.795116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.811906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.812489 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.812676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813056 4183 topology_manager.go:215] "Topology Admit Handler" podUID="378552fd-5e53-4882-87ff-95f3d9198861" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-vlbxv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813646 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.814482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.814668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.813932 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.816490 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.816766 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.820457 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.820702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821071 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821437 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.821974 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.822161 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.822350 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.814377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.823768 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6a23c0ee-5648-448c-b772-83dced2891ce" podNamespace="openshift-dns" podName="node-resolver-dn27q"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.823996 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824160 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824236 4183 topology_manager.go:215] "Topology Admit Handler" podUID="13045510-8717-4a71-ade4-be95a76440a7" podNamespace="openshift-dns" podName="dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824337 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824564 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824876 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.824900 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825235 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9fb762d1-812f-43f1-9eac-68034c1ecec7" podNamespace="openshift-cluster-version" podName="cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.825452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825508 4183 topology_manager.go:215] "Topology Admit Handler" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" podNamespace="openshift-oauth-apiserver" podName="apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.825892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.826256 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" podNamespace="openshift-operator-lifecycle-manager" podName="packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.826588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.826923 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" podNamespace="openshift-ingress-operator" podName="ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827276 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" podNamespace="openshift-cluster-samples-operator" podName="cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827581 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" podNamespace="openshift-cluster-machine-approver" podName="machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.827734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.827954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.828070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828349 4183 topology_manager.go:215] "Topology Admit Handler" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" podNamespace="openshift-ingress" podName="router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.828586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.828739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829150 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" podNamespace="openshift-machine-config-operator" podName="machine-config-daemon-zpnhg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829931 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830195 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830220 4183 topology_manager.go:215] "Topology Admit Handler" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" podNamespace="openshift-console-operator" podName="console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.829370 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.830751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831073 4183 topology_manager.go:215] "Topology Admit Handler" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" podNamespace="openshift-console-operator" podName="console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831430 4183 topology_manager.go:215] "Topology Admit Handler" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" podNamespace="openshift-machine-config-operator" podName="machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.831702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.831130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.831956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832299 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6268b7fe-8910-4505-b404-6f1df638105c" podNamespace="openshift-console" podName="downloads-65476884b9-9wcvx"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832628 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bf1a8b70-3856-486f-9912-a2de1d57c3fb" podNamespace="openshift-machine-config-operator" podName="machine-config-server-v65wr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.832975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.832721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.833167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.833551 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" podNamespace="openshift-image-registry" podName="node-ca-l92hr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834086 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" podNamespace="openshift-ingress-canary" podName="ingress-canary-2vhcn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834596 4183 topology_manager.go:215] "Topology Admit Handler" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" podNamespace="openshift-multus" podName="multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.834759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835213 4183 topology_manager.go:215] "Topology Admit Handler" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" podNamespace="hostpath-provisioner" podName="csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.835477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.835878 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" podNamespace="openshift-image-registry" podName="image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.836253 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" podNamespace="openshift-console" podName="console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.836600 4183 topology_manager.go:215] "Topology Admit Handler" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837070 4183 topology_manager.go:215] "Topology Admit Handler" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" podNamespace="openshift-apiserver" podName="apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837483 4183 topology_manager.go:215] "Topology Admit Handler" podUID="13ad7555-5f28-4555-a563-892713a8433a" podNamespace="openshift-authentication" podName="oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.837759 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838100 4183 topology_manager.go:215] "Topology Admit Handler" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" podNamespace="openshift-controller-manager" podName="controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838255 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838630 4183 topology_manager.go:215] "Topology Admit Handler" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" podNamespace="openshift-marketplace" podName="certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838756 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839190 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839427 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" podNamespace="openshift-marketplace" podName="community-operators-8jhz6"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839941 4183 topology_manager.go:215] "Topology Admit Handler" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" podNamespace="openshift-marketplace" podName="redhat-operators-f4jkp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840167 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840286 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838200 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.839898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840548 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.840698 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841006 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841374 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.841606 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.841929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842007 4183 topology_manager.go:215] "Topology Admit Handler" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" podNamespace="openshift-marketplace" podName="redhat-marketplace-8s8pc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842497 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.842525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842616 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842723 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.843163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.843357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.838713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.843931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.844006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842107 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.842081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.840689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.844632 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.844737 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.844653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845071 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845436 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" podNamespace="openshift-marketplace" podName="redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845496 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.845887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.845968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845355 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846052 4183 topology_manager.go:215] "Topology Admit Handler" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.845455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.846376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.846621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.863009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.880047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.897584 4183 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898734 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898908 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") 
pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898972 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.898996 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899018 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899090 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899118 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899139 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899164 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899219 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899248 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899303 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: 
\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899328 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899401 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jw8\" (UniqueName: 
\"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899509 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 
13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899572 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899632 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899682 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod 
\"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899711 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899762 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899864 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899897 4183 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899924 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.899976 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900032 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900057 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900080 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900131 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod 
\"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900291 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900518 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900596 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 
19:50:39.900626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900650 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900747 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod 
\"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.900898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901044 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901077 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod 
\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.901399 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.901756 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902158 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902212 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902295 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902424 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.402238295 +0000 UTC m=+407.094902923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.902508 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.402486162 +0000 UTC m=+407.095150820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902630 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.902742 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903086 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903454 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903594 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903871 4183 secret.go:194] Couldn't get secret 
openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.903991 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904087 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904148 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904321 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.904399 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.905258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.905873 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906263 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906408 4183 configmap.go:199] Couldn't get configMap 
openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.906756 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.909890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.909949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910096 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910190 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910486 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.910537 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910570 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910602 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910625 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910648 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " 
pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910730 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910765 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910854 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910881 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.910925 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: 
\"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911294 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911316 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911337 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.911358 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914042 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914106 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr9t\" 
(UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914135 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914158 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:39 crc kubenswrapper[4183]: 
I0813 19:50:39.914233 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914300 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914331 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914354 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod 
\"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914509 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 
19:50:39.914642 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914756 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914902 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914948 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.914974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.915003 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915083 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915114 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " 
pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915162 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915186 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915235 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod 
\"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915261 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.915365 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.921592 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:39 
crc kubenswrapper[4183]: E0813 19:50:39.902543 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.922372 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906261 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.927437 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.906577 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.927612 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 
19:50:39.927649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42762797 +0000 UTC m=+407.120292559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427661471 +0000 UTC m=+407.120326149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927698 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427690392 +0000 UTC m=+407.120354980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.927716 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.427709523 +0000 UTC m=+407.120374121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928118 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428093264 +0000 UTC m=+407.120757962 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928223 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428211547 +0000 UTC m=+407.120876145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42830978 +0000 UTC m=+407.120974448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428400983 +0000 UTC m=+407.121065581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428493755 +0000 UTC m=+407.121158343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928585 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428575108 +0000 UTC m=+407.121239696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928710 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428699471 +0000 UTC m=+407.121364059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.928927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.428905857 +0000 UTC m=+407.121570575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429137544 +0000 UTC m=+407.121802252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929261 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.429249537 +0000 UTC m=+407.121914125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.42935045 +0000 UTC m=+407.122015058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.929458 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.429448262 +0000 UTC m=+407.122112851 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.930252 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.930440 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.930734 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.933915 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.934582 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" 
(UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935125 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.935163 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435201617 +0000 UTC m=+407.127866355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935281 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.435329601 +0000 UTC m=+407.127994339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935391 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935440 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435432574 +0000 UTC m=+407.128097302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935723 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935938 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.435926588 +0000 UTC m=+407.128591236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936011 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936088 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436057161 +0000 UTC m=+407.128721889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936130 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936164 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436154044 +0000 UTC m=+407.128818772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936341 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436395861 +0000 UTC m=+407.129060499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.935288 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936593 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436583686 +0000 UTC m=+407.129248304 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936642 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.936682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.436673389 +0000 UTC m=+407.129338017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.939937 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.940023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.440010504 +0000 UTC m=+407.132675122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.940080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.440068636 +0000 UTC m=+407.132733254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941239 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941642 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.941769 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942016 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942135 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942318 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942424 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942515 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942606 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.942693 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.944945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945063 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945152 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945403 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945480 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945534 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945566 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945592 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.945675 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946078 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.446204751 +0000 UTC m=+407.138869529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946387 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.44649172 +0000 UTC m=+407.139156338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946719 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946963 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947095 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947176 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947282 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947340 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.447387085 +0000 UTC m=+407.140051813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947411 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.947923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948181 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948302 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948506 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948604 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.948696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.948996 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949269 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949314 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949377 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949387 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949400 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.949887 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.949954 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950021 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950070 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950128 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.950326 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950369 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950902 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.950965 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951022 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951026 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.951061 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951346 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951364 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951382 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.951577 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.952003 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.952069 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954085 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954197 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954239 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.954630 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.955715 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956034 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956247 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.956378 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956593 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.456561177 +0000 UTC m=+407.149225965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956740 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.956761 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.957096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.457077892 +0000 UTC m=+407.149742580 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.957331 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.945240 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.957879 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.958726 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.958958 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964044 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964223 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.964344 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981001 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.480932494 +0000 UTC m=+407.173597112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981078 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481054117 +0000 UTC m=+407.173718835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481090728 +0000 UTC m=+407.173755326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981135 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481119049 +0000 UTC m=+407.173783647 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.48114988 +0000 UTC m=+407.173814468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481170981 +0000 UTC m=+407.173835579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981201 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.481193371 +0000 UTC m=+407.173858079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481211322 +0000 UTC m=+407.173876030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981240 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481230432 +0000 UTC m=+407.173895030 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481256113 +0000 UTC m=+407.173920821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481274004 +0000 UTC m=+407.173938772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981302 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481293824 +0000 UTC m=+407.173958422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481314785 +0000 UTC m=+407.173979383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981342 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481333585 +0000 UTC m=+407.173998283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981369 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481353386 +0000 UTC m=+407.174018164 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481387827 +0000 UTC m=+407.174052615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.48148407 +0000 UTC m=+407.174148758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481523171 +0000 UTC m=+407.174187759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981540 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481543491 +0000 UTC m=+407.174208179 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481574512 +0000 UTC m=+407.174239400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481596023 +0000 UTC m=+407.174260751 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981659 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.981704 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.946976 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981881 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981933 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 
nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.481912432 +0000 UTC m=+407.174577170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981932 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947533 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.981981 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982001 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982028 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.482009175 +0000 UTC m=+407.174673943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.982062 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.482044396 +0000 UTC m=+407.174709194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982119 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982174 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 
19:50:39.982225 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982375 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982443 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: 
\"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982505 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982549 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982850 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982907 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982942 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.982975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983007 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983035 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:39 crc 
kubenswrapper[4183]: I0813 19:50:39.983093 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983135 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983195 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983241 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983312 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983370 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983401 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.983464 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.947642 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.986704 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.986915 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.944961 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.988040 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988586 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988727 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.988902 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989071 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989156 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.989230 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.989388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.995425 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995917 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995939 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.995958 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.946154 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997174 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997195 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997217 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997338 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997418 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997432 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997579 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997622 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:39 crc kubenswrapper[4183]: E0813 19:50:39.997640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997923 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.997966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998053 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998155 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998191 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998341 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998365 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998397 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998524 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998598 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998631 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:39 crc kubenswrapper[4183]: I0813 19:50:39.998751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003069 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003535 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003614 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003114 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.003184 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004323 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.004671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.007460 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.507495343 +0000 UTC m=+407.200160031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512153866 +0000 UTC m=+407.204818564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512192467 +0000 UTC m=+407.204857165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512211868 +0000 UTC m=+407.204876586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512235249 +0000 UTC m=+407.204900067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012277 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.512261349 +0000 UTC m=+407.204926047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.012295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.5122873 +0000 UTC m=+407.204951998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012938 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.012985 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.013737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015081 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.015740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.016143 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.016602 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.016871 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " 
pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.018521 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.019119 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.020741 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.021678 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.021971 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 
19:50:40.022410 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023291 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.516762998 +0000 UTC m=+407.209427816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.023677 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.523662685 +0000 UTC m=+407.216327283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.026600 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.027004 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.029345 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.029578 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.030344 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc 
kubenswrapper[4183]: I0813 19:50:40.031228 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.031653 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007909 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.038765 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007958 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007988 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008075 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008656 4183 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.008885 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.009447 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.009483 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007588 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032241 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.032259 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.032332 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: 
\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032367 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.032386 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.026884 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.523689636 +0000 UTC m=+407.216354334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039489 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539465937 +0000 UTC m=+407.232130645 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539502438 +0000 UTC m=+407.232167036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039528 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539521218 +0000 UTC m=+407.232185816 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539537429 +0000 UTC m=+407.232202017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039565 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.53955768 +0000 UTC m=+407.232222278 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039587 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.5395749 +0000 UTC m=+407.232239598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039612 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539599681 +0000 UTC m=+407.232264279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539619691 +0000 UTC m=+407.232284279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039646 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539638172 +0000 UTC m=+407.232302830 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539657612 +0000 UTC m=+407.232322200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039679 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539671033 +0000 UTC m=+407.232335631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.539686603 +0000 UTC m=+407.232351191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039709 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539703274 +0000 UTC m=+407.232367872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539721844 +0000 UTC m=+407.232386432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039749 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539743465 +0000 UTC m=+407.232408073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039767 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539761385 +0000 UTC m=+407.232426093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.039995 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.539974411 +0000 UTC m=+407.232639019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540013773 +0000 UTC m=+407.232678481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.540036533 +0000 UTC m=+407.232701121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040056 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540050534 +0000 UTC m=+407.232715122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.033427 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040101 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540094455 +0000 UTC m=+407.232759073 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.033626 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.033959 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.034014 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040177 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540170457 +0000 UTC m=+407.232835175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.034115 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540210798 +0000 UTC m=+407.232875486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.035141 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.035194 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040290 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.54027947 +0000 UTC m=+407.232944088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.035334 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540328002 +0000 UTC m=+407.232992620 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040459 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040473 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.040510 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.540499606 +0000 UTC m=+407.233164214 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041073 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041122 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041137 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041219 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.541186746 +0000 UTC m=+407.233851444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041359 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041440 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041449 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.541499335 +0000 UTC m=+407.234164093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041594 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041611 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.041689 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.54168032 +0000 UTC m=+407.234345138 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.041931 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.042124 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.042173 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.542155684 +0000 UTC m=+407.234820392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.007619 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059388 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.559359415 +0000 UTC m=+407.252024053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.013478 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.060348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.560333093 +0000 UTC m=+407.252997891 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.013312 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058596 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.561312631 +0000 UTC m=+407.253977259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061504 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061342 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.059226 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059265 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.059304 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061420 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061715 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not 
registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061727 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.061740 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058708 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.062594 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.058765 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.064511 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066485 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066499 4183 projected.go:200] Error preparing data for projected volume 
kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065179 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065330 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065490 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.065585 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.561543078 +0000 UTC m=+407.254207706 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.56653355 +0000 UTC m=+407.259198258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566558681 +0000 UTC m=+407.259223269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066583 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.566576462 +0000 UTC m=+407.259241060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066599 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566593442 +0000 UTC m=+407.259258180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566609253 +0000 UTC m=+407.259273951 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566623623 +0000 UTC m=+407.259288241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566642104 +0000 UTC m=+407.259306722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066664 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566658264 +0000 UTC m=+407.259322862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066679 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566674144 +0000 UTC m=+407.259338743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:40.566687595 +0000 UTC m=+407.259352193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.066755 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.566750137 +0000 UTC m=+407.259414725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569214847 +0000 UTC m=+407.261879465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069310 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569295549 +0000 UTC m=+407.261960157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569342581 +0000 UTC m=+407.262007189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569385832 +0000 UTC m=+407.262050440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.069427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.569421753 +0000 UTC m=+407.262086371 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070518 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070560 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070592 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070625 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.570616207 +0000 UTC m=+407.263280825 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070711 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070725 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070733 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.070759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.570751461 +0000 UTC m=+407.263416079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.070879 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.071762 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.072578 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.073055 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.082900 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083099 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.583389592 +0000 UTC m=+407.276054330 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.073269 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.073889 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.074001 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.075109 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086579 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083145 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.083009 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.085385 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085453 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085571 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085625 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085662 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.085730 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.085874 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086334 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086382 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089156 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089372 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089723 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.589694212 +0000 UTC m=+407.282359040 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089740 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090358 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089758 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089771 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090892 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089855 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090934 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089923 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090975 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089957 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091011 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089966 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091109 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089981 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091145 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.089996 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091210 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.086652 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091255 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.090673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59065947 +0000 UTC m=+407.283324208 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591298398 +0000 UTC m=+407.283962996 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591327129 +0000 UTC m=+407.283991717 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091352 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59134372 +0000 UTC m=+407.284008428 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59135985 +0000 UTC m=+407.284024438 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091380 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.59137304 +0000 UTC m=+407.284037638 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091396 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591390511 +0000 UTC m=+407.284055219 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591405241 +0000 UTC m=+407.284069839 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091426 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591419782 +0000 UTC m=+407.284084370 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.091442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.591433582 +0000 UTC m=+407.284098170 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093285 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093382 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.093667 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.094314 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.094496 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.095386 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.096028 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.097052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.097620 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.097913 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.098147 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.098447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.598416612 +0000 UTC m=+407.291081300 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100236 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100358 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100446 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.100562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.600549023 +0000 UTC m=+407.293213641 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.104282 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.105100 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.105922 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.106114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109218 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109303 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109319 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.109399 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.609377205 +0000 UTC m=+407.302041823 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117310 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117394 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117472 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117534 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117585 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.117875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118005 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118381 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118646 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118766 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.118885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119507 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119907 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: 
\"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.119965 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120064 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120117 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120230 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120374 4183 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120395 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120417 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121024 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121115 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121328 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121373 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.121851 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122055 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod 
\"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120701 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120932 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120936 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120956 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: 
\"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.120969 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122565 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122724 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.122551 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.125983 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " 
pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.126546 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.126735 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127133 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127269 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130843 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130563 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.127698 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.128566 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.130422 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131232 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131442 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 
19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.131675 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132135 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" 
(UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132170 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132163 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132204 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132744 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: 
\"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132909 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134085 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.134637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.132938 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.135971 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136029 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136102 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136117 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.136618 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.136632 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.136711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.636685536 +0000 UTC m=+407.329350374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.139030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.139129 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.140384 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.140426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142102 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.142871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143111 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143001 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143590 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.143754 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144419 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144900 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.144629 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.145027 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.145886 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.160910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.170534 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.175171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.182246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.187836 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.196391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197400 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197445 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197465 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.197537 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.697515744 +0000 UTC m=+407.390180472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.203657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.221286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.237889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.253051 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.254531 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.268483 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410cf605_1970_4691_9c95_53fdc123b1f3.slice/crio-5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e WatchSource:0}: Error finding container 5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e: Status 404 returned error can't find the container with id 5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.279875 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280084 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280199 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.280339 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.780315891 +0000 UTC m=+407.472980619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296267 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296334 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.296430 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:40.79640468 +0000 UTC m=+407.489069408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.298240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.339918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.341980 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.367083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.375130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.396203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.419432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.428216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.428929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.454329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.454950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455129 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.455162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455563 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455613 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455667 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.455638061 +0000 UTC m=+408.148302839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455645 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455697 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455706 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.455685313 +0000 UTC m=+408.148350061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.455763 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.458383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.458365089 +0000 UTC m=+408.151029697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459226 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459358 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459406 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459505 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459578 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459615 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459648 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459683 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459716 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.459961 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460015 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460050 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460107 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460171 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.460251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc 
kubenswrapper[4183]: I0813 19:50:40.460282 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460420198 +0000 UTC m=+408.153084796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460464 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460456719 +0000 UTC m=+408.153121307 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460522 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460553902 +0000 UTC m=+408.153218520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460611 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.460632514 +0000 UTC m=+408.153297132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460692 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460696 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461516 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461749 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.461956 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462026 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462389 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462410 4183 projected.go:294] Couldn't get configMap 
openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462422 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.462710 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463178 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463411 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463496 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463562 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463636 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 
19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463727 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463927 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.463948 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464018 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464090 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.464170 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460764 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.460727 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.460713906 +0000 UTC m=+408.153378524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.467622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.467600923 +0000 UTC m=+408.160265521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.467661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 
19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.468376 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469257 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469291952 +0000 UTC m=+408.161956570 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469323893 +0000 UTC m=+408.161988491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469516 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469504128 +0000 UTC m=+408.162168726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.469528268 +0000 UTC m=+408.162193716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469557289 +0000 UTC m=+408.162221887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469586 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.46957859 +0000 UTC m=+408.162243188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469757 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469738414 +0000 UTC m=+408.162403012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469837 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469768885 +0000 UTC m=+408.162433483 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.469890 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.469852168 +0000 UTC m=+408.162516846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.470911 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.47100016 +0000 UTC m=+408.163664788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471278 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.471356171 +0000 UTC m=+408.164020769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471769 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.471755912 +0000 UTC m=+408.164420610 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471530 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.471565 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.473353 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.473339137 +0000 UTC m=+408.166003755 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476701 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476679103 +0000 UTC m=+408.169343841 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476714784 +0000 UTC m=+408.169379372 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476733884 +0000 UTC m=+408.169398482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476756 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:41.476750155 +0000 UTC m=+408.169414753 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476873 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476764075 +0000 UTC m=+408.169428663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476920 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476908999 +0000 UTC m=+408.169573587 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476936 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.47692974 +0000 UTC m=+408.169594448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476952801 +0000 UTC m=+408.169617389 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.476981 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.476969741 +0000 UTC m=+408.169634329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477047 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477080434 +0000 UTC m=+408.169745292 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477122 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477180 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.477220 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477358 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477398 4183 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477382853 +0000 UTC m=+408.170047471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477447 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.477472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.477465965 +0000 UTC m=+408.170130573 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.489692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.523155 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"13eba7880abbfbef1344a579dab2a0b19cce315561153e251e3263ed0687b3e7"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.523402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.548115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.593376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9572cbf27a025e52f8350ba1f90df2f73ac013d88644e34f555a7ae71822234\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:23:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:07Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.597211 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"221a24b0d917be98aa8fdfcfe9dbbefc5cd678c5dd905ae1ce5de6a160842882"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610644 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.610768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611178 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611237 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611320121 +0000 UTC m=+408.303984859 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611377 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611400 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611428 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611451 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611465 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 
19:50:40.611482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611463675 +0000 UTC m=+408.304128403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611501356 +0000 UTC m=+408.304165954 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611560 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611592309 +0000 UTC m=+408.304257037 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611646 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.611661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611766 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.611668531 +0000 UTC m=+408.304333209 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612124 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.612225 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612287 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612310 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612332 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61232174 +0000 UTC m=+408.304986358 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61233993 +0000 UTC m=+408.305004558 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611862 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612369 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612383 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612414 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612406382 +0000 UTC m=+408.305070990 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612386 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612442863 +0000 UTC m=+408.305107581 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611906 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612473 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612485 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612523 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612515185 +0000 UTC m=+408.305179913 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611929 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612555 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612547486 +0000 UTC m=+408.305212094 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611939 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612577 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611968 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611977 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612627 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612635 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612582 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612576787 +0000 UTC m=+408.305241395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612668 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612661629 +0000 UTC m=+408.305326227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61267664 +0000 UTC m=+408.305341228 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.61269076 +0000 UTC m=+408.305355348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611995 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612721631 +0000 UTC m=+408.305386489 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.611895 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.612763 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.612757312 +0000 UTC m=+408.305421930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624197 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624285 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624301 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object 
"openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624598 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.6245759 +0000 UTC m=+408.317240508 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.624674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624873 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.624916 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.624902479 +0000 UTC m=+408.317567087 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.624954 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625186 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625201 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625461 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.625442705 +0000 UTC m=+408.318107313 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625686 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.625842 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.625771574 +0000 UTC m=+408.318492644 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625870 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.625940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626040 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.626071212 +0000 UTC m=+408.318735830 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626327 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626382 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.626369941 +0000 UTC m=+408.319034559 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.626550 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.626884 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627156 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.627138403 +0000 UTC m=+408.319803111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.627190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.627273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627418 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.627715 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.627700459 +0000 UTC m=+408.320365287 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.634892 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.635391 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.635379579 +0000 UTC m=+408.328044317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635525 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 
19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.635974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636077 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636256 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.636477 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637104 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637440 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637549 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637587 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637656 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.637626603 +0000 UTC m=+408.330291271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.637762 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638749 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638767 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638871 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638931 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.638898609 +0000 UTC m=+408.331563447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642307307 +0000 UTC m=+408.334972075 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638000 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642396 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642385259 +0000 UTC m=+408.335049937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638343 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642446 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642467 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642518783 +0000 UTC m=+408.335183461 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638426 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.642590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.642579444 +0000 UTC m=+408.335244132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.637768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642726 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642944 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.642993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643201 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643458 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.643898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644219 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644254 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644430 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644465 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644620 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.644682 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652375 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652421 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652698 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.652734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653506 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653545 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.653857 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.664754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667430 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667539 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667560 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.66762997 +0000 UTC m=+408.360294598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.638983 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.667740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.667720143 +0000 UTC m=+408.360384861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669223 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669389 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669424 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669507 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.669483293 +0000 UTC m=+408.362147991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669624 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.669657688 +0000 UTC m=+408.362322306 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669957 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.669979 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670224 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670275 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.670259045 +0000 UTC m=+408.362923784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670360 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.670603 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.670591155 +0000 UTC m=+408.363256073 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671119 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671139 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671306 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671187692 +0000 UTC m=+408.363852400 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671532 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671576703 +0000 UTC m=+408.364241381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671604 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671664 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671651935 +0000 UTC m=+408.364316653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671692 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671736518 +0000 UTC m=+408.364401816 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671754 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.671999 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.671988625 +0000 UTC m=+408.364653243 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672531 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672704 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.672687545 +0000 UTC m=+408.365352173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672716 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.672756287 +0000 UTC m=+408.365420895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639242 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.672971 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673014 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673033 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673081226 +0000 UTC m=+408.365745884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673173 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67321258 +0000 UTC m=+408.365877198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673248731 +0000 UTC m=+408.365913349 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639324 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673286 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673316633 +0000 UTC m=+408.365981241 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673408 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673426 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673437 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673478917 +0000 UTC m=+408.366143535 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673539 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673578 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67356302 +0000 UTC m=+408.366227628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673643 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.673683063 +0000 UTC m=+408.366347691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.673757 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674115 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674094425 +0000 UTC m=+408.366759153 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674173 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674220 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674205408 +0000 UTC m=+408.366870036 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674282 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674322 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674309221 +0000 UTC m=+408.366973829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674398 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674421 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674465796 +0000 UTC m=+408.367130424 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674534 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674551678 +0000 UTC m=+408.367216396 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674648 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674664 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674677 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.674713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.674704263 +0000 UTC m=+408.367368891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675003 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675019 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675032 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675066653 +0000 UTC m=+408.367731371 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675151 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675165 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675173 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675210 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675198537 +0000 UTC m=+408.367863255 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675272 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675312 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675294489 +0000 UTC m=+408.367959107 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675388 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675404 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675422 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675466 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675448834 +0000 UTC m=+408.368113632 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675554 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675569 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675577 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.675615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.675600998 +0000 UTC m=+408.368265726 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676118663 +0000 UTC m=+408.368783501 (durationBeforeRetry 1s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676208 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676222 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676230 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not 
registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676282 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676273397 +0000 UTC m=+408.368938015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639039 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676322999 +0000 UTC m=+408.368987687 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639667 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676374 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.67636126 +0000 UTC m=+408.369025868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639693 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676422 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676411631 +0000 UTC m=+408.369076239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639744 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676454 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676497444 +0000 UTC m=+408.369162182 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640267 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676553685 +0000 UTC m=+408.369218303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640674 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676608 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676595417 +0000 UTC m=+408.369260025 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.640955 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676653 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676637398 +0000 UTC m=+408.369302016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641007 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.676699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.676688709 +0000 UTC m=+408.369353317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641252 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677108 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677093321 +0000 UTC m=+408.369757929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641305 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677151 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677144412 +0000 UTC m=+408.369809130 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.641344 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677342 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677364 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677498012 +0000 UTC m=+408.370162740 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677625 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677668 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677648977 +0000 UTC m=+408.370313595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677624 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.677717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.677706218 +0000 UTC m=+408.370370906 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678032 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678071 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678225 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678378 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678399 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.678414 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.680703 4183 configmap.go:199] Couldn't get configMap 
openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682398 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682423 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.682440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.687249 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.688482 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"41d80ed1b6b3289201cf615c5e532a96845a5c98c79088b67161733f63882539"} Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.688504 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.688567 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod 
openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689052 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689161 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689177 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689186 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689288 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689303 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689312 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object 
"openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689418 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689432 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689440 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689530 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689543 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689552 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.689626 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:40 crc 
kubenswrapper[4183]: E0813 19:50:40.690089 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694406 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.678068569 +0000 UTC m=+408.370733357 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694516299 +0000 UTC m=+408.387180897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694542109 +0000 UTC m=+408.387206697 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.69456639 +0000 UTC m=+408.387230978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694730915 +0000 UTC m=+408.387395613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.694759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694747315 +0000 UTC m=+408.387412013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.639365 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.699059 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.702968 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.699630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709018 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.694766776 +0000 UTC m=+408.387431374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.70926622 +0000 UTC m=+408.401930828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709495977 +0000 UTC m=+408.402160775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709550 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709529308 +0000 UTC m=+408.402193966 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.709577 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.709565979 +0000 UTC m=+408.402230657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714227782 +0000 UTC m=+408.406892390 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.71450534 +0000 UTC m=+408.407169938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714548 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714531291 +0000 UTC m=+408.407196129 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.714580 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.714570242 +0000 UTC m=+408.407234840 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.714958 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715468 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.715519 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715737 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.715874899 +0000 UTC m=+408.408539737 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715975 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.715989 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716030 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716022723 +0000 UTC m=+408.408687341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716072 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716051654 +0000 UTC m=+408.408716242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716131 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716190428 +0000 UTC m=+408.408855046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716287 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716313 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716336 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716388 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.716376064 +0000 UTC m=+408.409040762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.716477 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719348 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719369 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.71940179 +0000 UTC m=+408.412066418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719496 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719514 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719522 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719549504 +0000 UTC m=+408.412214122 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719639 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719663 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719672 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719718 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719706979 +0000 UTC m=+408.412371597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719877 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.719924 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.719909655 +0000 UTC m=+408.412574383 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.757513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.764569 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"9bb711518b1fc4ac72f4ad05c59c2bd3bc932c94879c31183df088652e4ed2c3"}
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.790268 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"815c16566f290b783ea9eced9544573db3088d99a58cb4d87a1fd8ab2b69614e"}
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.797291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.810977 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb762d1_812f_43f1_9eac_68034c1ecec7.slice/crio-44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c WatchSource:0}: Error finding container 44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c: Status 404 returned error can't find the container with id 44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.822586 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"5716d33776fee1b3bfd908d86257b9ae48c1c339a2b3cc6d4177c4c9b6ba094e"}
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.833599 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.833751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.834230 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834607 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834649 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834664 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.834708246 +0000 UTC m=+408.527372974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834866 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834883 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834892 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.834913601 +0000 UTC m=+408.527578409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834978 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.834988 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: E0813 19:50:40.835013 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:41.835004674 +0000 UTC m=+408.527669292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.837241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.849250 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"e76d945a8cb210681a40e3f9356115ebf38b8c8873e7d7a82afbf363f496a845"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.873331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.888954 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"807117e45707932fb04c35eb8f8cd7233e9fecc547b5e6d3e81e84b6f57d09af"} Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.900523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.927267 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"e4abca68aabfc809ca21711270325e201599e8b85acaf41371638a0414333adf"} Aug 13 19:50:40 crc kubenswrapper[4183]: W0813 19:50:40.932948 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a23c0ee_5648_448c_b772_83dced2891ce.slice/crio-7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f WatchSource:0}: Error finding container 7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f: Status 404 returned error can't find the container with id 7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f Aug 13 19:50:40 
crc kubenswrapper[4183]: I0813 19:50:40.933327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.960512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:40 crc kubenswrapper[4183]: I0813 19:50:40.978547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.029130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.057008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.105141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.140646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.198939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.215139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.216639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219327 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.227623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.235938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.236976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220003 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.237876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220121 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.238657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.239927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.240038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.240170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.219665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.220525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.249990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.250536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.251216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.252950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.256207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.300629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.315526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.334712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.378702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities 
extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: W0813 19:50:41.431345 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc291782_27d2_4a74_af79_c7dcb31535d2.slice/crio-8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4 WatchSource:0}: Error finding container 8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4: Status 404 returned error can't find the container with id 8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4 Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.464467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f4
9cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d896
86e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473207 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473312 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473293997 +0000 UTC m=+410.165958705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473454 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.473484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473627 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473660538 +0000 UTC m=+410.166325246 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473663 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473715 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.47373438 +0000 UTC m=+410.166398998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.473866 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.473765421 +0000 UTC m=+410.166430119 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474724 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474885 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474937 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.474923424 +0000 UTC m=+410.167588172 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.474971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.474988 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475018606 +0000 UTC m=+410.167683294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475055 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475090 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475080758 +0000 UTC m=+410.167745376 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.47512656 +0000 UTC m=+410.167791298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475235 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475389 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475503 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475540571 +0000 UTC m=+410.168205349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475584 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475587 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475592 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475614 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475614703 +0000 UTC m=+410.168279391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475632 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475656 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475669 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475662325 +0000 UTC m=+410.168327093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475687716 +0000 UTC m=+410.168352494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475703 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475716 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475706626 +0000 UTC m=+410.168371284 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475720 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.475616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475723857 +0000 UTC m=+410.168388455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475745 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.475737937 +0000 UTC m=+410.168402635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475509 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475879 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.475766 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482521 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.482594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482725 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482866 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.482770288 +0000 UTC m=+410.175434906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482930 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.482951963 +0000 UTC m=+410.175616581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482992 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483000085 +0000 UTC m=+410.175664723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483014 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483029485 +0000 UTC m=+410.175694083 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483043 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483054 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483046476 +0000 UTC m=+410.175711064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.482991 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483074 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483062736 +0000 UTC m=+410.175727414 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483092237 +0000 UTC m=+410.175756865 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483113728 +0000 UTC m=+410.175778356 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483145469 +0000 UTC m=+410.175810157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483313 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.483355 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.483346034 +0000 UTC m=+410.176010642 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.532082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.564233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.584270 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.584384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.584368602 +0000 UTC m=+410.277033220 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.584097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.584529 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585072 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585285 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585561 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585653 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.585643078 +0000 UTC m=+410.278307816 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585879 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.585899 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.585890855 +0000 UTC m=+410.278555453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.585980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586173 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.586248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.586644 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588029 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588496 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588555 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.588628 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.597944 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template
podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.58676001 +0000 UTC m=+410.279427598 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598293 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598272999 +0000 UTC m=+410.290937717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598391513 +0000 UTC m=+410.291056211 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598431534 +0000 UTC m=+410.291096192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598457 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.598450934 +0000 UTC m=+410.291115522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.598472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.598465835 +0000 UTC m=+410.291130423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.610340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.656593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.687893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688018 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " 
pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688110 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688194 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688223 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688254 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688280 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688314 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688350 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 
13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: 
\"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688614 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688664 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.688746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689104 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689203 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" 
(UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689316 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689344 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689374 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689462 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689600 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689713 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689851 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689887 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689910 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689937 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.689966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690123 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690128 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690167 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690184 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690170346 +0000 UTC m=+410.382834964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690323 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690378 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690411 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690402152 +0000 UTC m=+410.383066880 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690450 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690509 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690522 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690524 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690532 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690552977 +0000 UTC m=+410.383217655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690593 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690605 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690629299 +0000 UTC m=+410.383294037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690667 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690680 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690688 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.690704621 +0000 UTC m=+410.383369349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.690688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712639 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712754 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712940 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.712980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713042 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713077 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713113 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713166 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.713888 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714150 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71447942 +0000 UTC m=+410.407144068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.703566 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714544292 +0000 UTC m=+410.407208900 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705091 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714593884 +0000 UTC m=+410.407258612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705144 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714632695 +0000 UTC m=+410.407297303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705183 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714682 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714674726 +0000 UTC m=+410.407339334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705281 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714719 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714739 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714917 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714769609 +0000 UTC m=+410.407434287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705329 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.714971 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.714957694 +0000 UTC m=+410.407622322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705387 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715010 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715002725 +0000 UTC m=+410.407667343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705450 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715033 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715043 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715071 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715065347 +0000 UTC m=+410.407729965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705499 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715089 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715103 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715147 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715136119 +0000 UTC m=+410.407800947 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705898 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715168 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715180 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715205 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715197771 +0000 UTC m=+410.407862389 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705954 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715244502 +0000 UTC m=+410.407909310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.705993 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715284993 +0000 UTC m=+410.407949611 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706313 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715323 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715351785 +0000 UTC m=+410.408016403 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706371 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715377 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715393497 +0000 UTC m=+410.408058115 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706420 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715456 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715439978 +0000 UTC m=+410.408104646 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706515 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715499 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715485659 +0000 UTC m=+410.408150317 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706554 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715539221 +0000 UTC m=+410.408203899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706588 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715595002 +0000 UTC m=+410.408259680 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706679 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715642164 +0000 UTC m=+410.408306782 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706725 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715687055 +0000 UTC m=+410.408351723 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.706761 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.715731526 +0000 UTC m=+410.408396204 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715931 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715974 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.715988 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716016 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716027 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716002084 +0000 UTC m=+410.408666832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716033 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716078396 +0000 UTC m=+410.408743094 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716174 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.71620303 +0000 UTC m=+410.408867638 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716266 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716296222 +0000 UTC m=+410.408960840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716360 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716419 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.716399095 +0000 UTC m=+410.409063783 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716502 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716517 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716536 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71656071 +0000 UTC m=+410.409225338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717416 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717468 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717457456 +0000 UTC m=+410.410122084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717527 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717571 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.717561238 +0000 UTC m=+410.410225856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717625 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717649561 +0000 UTC m=+410.410314179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717711 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717754 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.717744934 +0000 UTC m=+410.410409562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717921 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.717946559 +0000 UTC m=+410.410611188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.717999 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718035 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.718026932 +0000 UTC m=+410.410691550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718082 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718105784 +0000 UTC m=+410.410770392 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718167 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718188206 +0000 UTC m=+410.410852814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718250 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718289 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718280859 +0000 UTC m=+410.410945487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718331 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718368 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718355621 +0000 UTC m=+410.411020239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718428 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718446 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718460 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718480835 +0000 UTC m=+410.411145453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718555 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.718582918 +0000 UTC m=+410.411247526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718642 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.71866425 +0000 UTC m=+410.411328868 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718731 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718747 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.718759 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.716666 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.735182 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.735145851 +0000 UTC m=+410.427810479 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690340 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737022 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737118 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737087597 +0000 UTC m=+410.429752215 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737248 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.737283132 +0000 UTC m=+410.429947750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737451 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.737494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed.
No retries permitted until 2025-08-13 19:50:43.737477488 +0000 UTC m=+410.430142156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.738586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739061 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739120 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739101254 +0000 UTC m=+410.431765942 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739199 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739231228 +0000 UTC m=+410.431895916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739307 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739347 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739338121 +0000 UTC m=+410.432002739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739402 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739452 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739437684 +0000 UTC m=+410.432102632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739549 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739569 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739581 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739641 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739626979 +0000 UTC m=+410.432291877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.739722 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748345 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748656 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.748993 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.749963 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750098 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750337 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750452 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750547 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.750722 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751004 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751197 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751296 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751441 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.751891 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.690463 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752368 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752335 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752417 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object
"openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752510 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752532 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752615 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752659 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.752689 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753277 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753568 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.753876 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754000 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754208 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.754650 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755005 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755084 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755102 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755199 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755282 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755356 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755438 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755524 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755641 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755658 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755670 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755847 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755868 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755878 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755957 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755973 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.755992 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756083 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756096 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756104 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756517 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756591 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756608 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.757937 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.756534 4183
projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.762895 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.739765183 +0000 UTC m=+410.432429871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772576971 +0000 UTC m=+410.465241569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772623 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772615092 +0000 UTC m=+410.465279680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772648 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772634573 +0000 UTC m=+410.465299171 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772665973 +0000 UTC m=+410.465330571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772686474 +0000 UTC m=+410.465351072 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.772724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.772706715 +0000 UTC m=+410.465371313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773183 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773166568 +0000 UTC m=+410.465831176 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773211 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773197619 +0000 UTC m=+410.465862767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773244 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773227879 +0000 UTC m=+410.465892537 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773267 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.77325492 +0000 UTC m=+410.465919588 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773283 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773275991 +0000 UTC m=+410.465940579 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773302 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773291351 +0000 UTC m=+410.465955939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773310462 +0000 UTC m=+410.465975170 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773341 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773325992 +0000 UTC m=+410.465990590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773363 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773351593 +0000 UTC m=+410.466016191 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773381 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773373204 +0000 UTC m=+410.466037792 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773391064 +0000 UTC m=+410.466055662 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773424 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773415815 +0000 UTC m=+410.466080413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773442 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773434505 +0000 UTC m=+410.466099093 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773450346 +0000 UTC m=+410.466114944 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773486 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773477737 +0000 UTC m=+410.466142335 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773501 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773493747 +0000 UTC m=+410.466158585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773524 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773509958 +0000 UTC m=+410.466174656 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773545 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:43.773538918 +0000 UTC m=+410.466203506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773554819 +0000 UTC m=+410.466219527 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773584 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773570469 +0000 UTC m=+410.466235057 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773584 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773607 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.77360082 +0000 UTC m=+410.466265408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773628 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773620201 +0000 UTC m=+410.466284799 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773648 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773637351 +0000 UTC m=+410.466301939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773655842 +0000 UTC m=+410.466320440 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773678 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773671472 +0000 UTC m=+410.466336070 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773700 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773686623 +0000 UTC m=+410.466351391 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.773726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.773715643 +0000 UTC m=+410.466380241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.775512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.816971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817039 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817074 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817136 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:41 crc 
kubenswrapper[4183]: I0813 19:50:41.817242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817286 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817340 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.817375 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.822371 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.822540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.822520858 +0000 UTC m=+410.515185666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823111 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827029 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827017917 +0000 UTC m=+410.519682545 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823314 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827060 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827080 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827104519 +0000 UTC m=+410.519769127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823368 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827146 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.82713893 +0000 UTC m=+410.519803548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823422 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827164 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827186582 +0000 UTC m=+410.519851190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823486 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827211 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827224 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827246873 +0000 UTC m=+410.519911481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823527 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827274 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827298 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827328276 +0000 UTC m=+410.519992884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823577 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827361 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827372 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827392608 +0000 UTC m=+410.520057226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.823611 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827475 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.82746769 +0000 UTC m=+410.520132368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.823960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.824019 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827643 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827664485 +0000 UTC m=+410.520329103 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.824606 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.827714 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.827705556 +0000 UTC m=+410.520370174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.828028 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.828078 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.834861 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836083 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.836070246 +0000 UTC m=+410.528734984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.835018 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836549 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836639 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.836863 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.836757605 +0000 UTC m=+410.529422303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.900416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.929755 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.929893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.930475 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932632 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932688 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932702 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932873 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932894 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932930 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.932914523 +0000 UTC m=+410.625579141 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.932993 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933008 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933016 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.933033907 +0000 UTC m=+410.625698525 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: E0813 19:50:41.933284 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:43.933273204 +0000 UTC m=+410.625937902 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:41 crc kubenswrapper[4183]: I0813 19:50:41.980329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2"}
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:41.999954 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4"}
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.001483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.013623 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"7bbc561a16cc9a56f4d08fa72e19c57f5c5cdb54ee1a9b77e752effc42fb180f"}
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.022652 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"7f52ab4d1ec6be2d7d4c2b684f75557c65a5b3424d556a21053e8abd54d2afd9"}
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.037563 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"55fa820b6afd0d7cad1d37a4f84deed3f0ce4495af292cdacc5f97f75e79113b"}
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.044591 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"44d24fb11db7ae2742519239309e3254a495fb0556d8e82e16f4cb9c4b64108c"}
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.038442 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045279 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.045395 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.056295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.091309 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125228 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.125259 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.149043 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.149323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.164679 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165056 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.165121 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.203542 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.208455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.208726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.208959 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209394 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.209709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.210605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.236393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.305114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.411594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48c1471ee6eaa615e5b0e19686e3fafc0f687dc03625988c88b411dc682d223f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:24:26Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417054 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417101 4183 kubelet_node_status.go:729] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417123 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417160 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.417211 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.484289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.485059 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511656 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511714 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.511766 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:42Z","lastTransitionTime":"2025-08-13T19:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.548476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.567581 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: E0813 19:50:42.567636 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.604743 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.632592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.674444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.762492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.812684 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.840428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:42 crc kubenswrapper[4183]: I0813 19:50:42.906099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.004691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.041613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.062631 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.210345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.210731 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.213964 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.213974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214174 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214375 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.214767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.215565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.215584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.216929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.216982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.217417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.218253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.218319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.343986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.422944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.514897 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515258 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515471 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515488903 +0000 UTC m=+414.208153801 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515526 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515573466 +0000 UTC m=+414.208238094 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515625 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515645408 +0000 UTC m=+414.208313616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515308 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515699 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515731 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.51573433 +0000 UTC m=+414.208399048 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515751901 +0000 UTC m=+414.208416629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.515696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515928 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.515990 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.515979677 +0000 UTC m=+414.208644585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.516530 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.516514443 +0000 UTC m=+414.209179041 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.516683 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.517085 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.517051798 +0000 UTC m=+414.209716786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518172 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518245 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518444 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518493 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed.
No retries permitted until 2025-08-13 19:50:47.518483599 +0000 UTC m=+414.211148187 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518553 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518563 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.518589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.518591882 +0000 UTC m=+414.211256610 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518644 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519254 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.51923774 +0000 UTC m=+414.211902459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518648 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519309783 +0000 UTC m=+414.211974501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518707 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519361484 +0000 UTC m=+414.212026322 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518723 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519424 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519450 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519500 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519488108 +0000 UTC m=+414.212152826 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518725 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519560 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519543879 +0000 UTC m=+414.212208827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.518744 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.519607 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.519597711 +0000 UTC m=+414.212262419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520134 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520193 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.520174697 +0000 UTC m=+414.212839425 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.520235 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.520625 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813
19:50:43.520867 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.520846137 +0000 UTC m=+414.213510855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.520928 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.521262 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.524465 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.52444889 +0000 UTC m=+414.217113698 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.526023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.526585 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.526647 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.526632962 +0000 UTC m=+414.219297700 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.526707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.527516 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.527570 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.527557568 +0000 UTC m=+414.220222276 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.527322 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.528049 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.528102 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.528090394 +0000 UTC m=+414.220755172 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.528138 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.529140 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.529645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.529621587 +0000 UTC m=+414.222286265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.529743 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.530001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.530276 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.530723 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.531437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" 
failed. No retries permitted until 2025-08-13 19:50:47.531413499 +0000 UTC m=+414.224078277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.531955 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.537273 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.537219314 +0000 UTC m=+414.229884073 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.537416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.635495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.640281 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.641707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.641996 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.642728 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645430 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645523 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645496509 +0000 UTC m=+414.338161197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645598 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645637393 +0000 UTC m=+414.338302011 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645695 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.645740 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.645724716 +0000 UTC m=+414.338389334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646011 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646057 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646046705 +0000 UTC m=+414.338711533 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646110 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646147 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646139048 +0000 UTC m=+414.338803666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646248 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646286252 +0000 UTC m=+414.338950920 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646358 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646388525 +0000 UTC m=+414.339053233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646647 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.646703 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.646685713 +0000 UTC m=+414.339350411 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.647645 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.647740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.648123 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.648200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.648184686 +0000 UTC m=+414.340849784 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.662162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.714059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750698 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750867 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.750977 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751001 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751117 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751150 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751354 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751503 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751567 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751625 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.751923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752036 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752069 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752174 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752224 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752688 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.752696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753503 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753528 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753544 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753632 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753713 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.753901 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754029 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754090 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754142 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754205 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754217 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754227 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754311 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754386 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754638 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754658 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.754666 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755092 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755126 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755136 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755431 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755446 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.755453 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762286 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762364 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762430 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762444 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762500 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762545 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.762599 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763275 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763423 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763463 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763538 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763608 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763626 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763638 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763686 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763747 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763942 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.763995 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764065 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764110 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764155 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764215 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764273 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764319 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764384 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764405 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object
"openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764469 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764527 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764567 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764628 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764693 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764706 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764714 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.764764 4183 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.765020 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.770704 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.770865 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771036 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771071 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771073 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc 
kubenswrapper[4183]: E0813 19:50:43.771099 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771182 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771155751 +0000 UTC m=+414.463820489 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771200 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771237 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771224952 +0000 UTC m=+414.463889581 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771302 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771329 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771341 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:43 crc 
kubenswrapper[4183]: E0813 19:50:43.771377 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771366877 +0000 UTC m=+414.464031495 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771406 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.771426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771441 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771428418 +0000 UTC m=+414.464093146 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771463 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771454529 +0000 UTC m=+414.464119237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77147303 +0000 UTC m=+414.464137728 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771505 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77149748 +0000 UTC m=+414.464162068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771509 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771512321 +0000 UTC m=+414.464176909 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771525 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771533591 +0000 UTC m=+414.464198189 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771594 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771582363 +0000 UTC m=+414.464246961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771595 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771617 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771618 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771605553 +0000 UTC m=+414.464270151 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771638 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771631324 +0000 UTC m=+414.464295922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771649985 +0000 UTC m=+414.464314703 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771674 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771666855 +0000 UTC m=+414.464331573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771732037 +0000 UTC m=+414.464396745 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.771767 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.771754358 +0000 UTC m=+414.464421876 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772466 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772456498 +0000 UTC m=+414.465121096 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772483 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.772475938 +0000 UTC m=+414.465140536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772499 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772492489 +0000 UTC m=+414.465157087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772509489 +0000 UTC m=+414.465174077 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.77253382 +0000 UTC m=+414.465198408 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772558 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.7725487 +0000 UTC m=+414.465213298 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772573 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:47.772566491 +0000 UTC m=+414.465231089 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772581841 +0000 UTC m=+414.465246439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772597632 +0000 UTC m=+414.465262230 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772617542 +0000 UTC m=+414.465282140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772638463 +0000 UTC m=+414.465303061 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772653053 +0000 UTC m=+414.465317641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772678 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772669304 +0000 UTC m=+414.465333902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772695 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772685684 +0000 UTC m=+414.465350282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772704965 +0000 UTC m=+414.465369563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772720015 +0000 UTC m=+414.465384603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772734486 +0000 UTC m=+414.465399074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772757 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772750336 +0000 UTC m=+414.465415044 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.772974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772764336 +0000 UTC m=+414.465428924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.772989473 +0000 UTC m=+414.465654061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773031 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773023504 +0000 UTC m=+414.465688102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773047 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773039184 +0000 UTC m=+414.465703772 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773064 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773054745 +0000 UTC m=+414.465719343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773074875 +0000 UTC m=+414.465739463 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773098 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773091126 +0000 UTC m=+414.465755714 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773114 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773107246 +0000 UTC m=+414.465771844 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773122747 +0000 UTC m=+414.465787345 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773142567 +0000 UTC m=+414.465807165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773164398 +0000 UTC m=+414.465828996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.773194 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.773181148 +0000 UTC m=+414.465845746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773268 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773432 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773531 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.773734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774431 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774451 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774535 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774768 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774878 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.774962 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775033 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775086 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775161 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775174 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775248 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775309 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775364 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775443 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775457 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.775514 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776292 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776311 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776325 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.776367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.776356609 +0000 UTC m=+414.469021237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780111 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780096096 +0000 UTC m=+414.472760724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780132587 +0000 UTC m=+414.472797185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780165 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780157448 +0000 UTC m=+414.472822046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780186 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780177648 +0000 UTC m=+414.472842236 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780204 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780197099 +0000 UTC m=+414.472861697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780223 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.78021662 +0000 UTC m=+414.472881218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780241 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.78023447 +0000 UTC m=+414.472899068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780256 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.78024897 +0000 UTC m=+414.472913558 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780272 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780265931 +0000 UTC m=+414.472930519 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780298 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780289912 +0000 UTC m=+414.472954510 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780313 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780306622 +0000 UTC m=+414.472971350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780345 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780336383 +0000 UTC m=+414.473000981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780357064 +0000 UTC m=+414.473021662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780386 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780378114 +0000 UTC m=+414.473042702 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780429 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.780414835 +0000 UTC m=+414.473079633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.780546 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780706 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780756 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780948 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780997 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781017 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed.
No retries permitted until 2025-08-13 19:50:47.780997852 +0000 UTC m=+414.473662700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.780954 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781041 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.781032083 +0000 UTC m=+414.473696701 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781057 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781070 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.781134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.781122675 +0000 UTC m=+414.473787603 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.800352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.859673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.882719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.882877 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.882935 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.882966 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883009 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.882982557 +0000 UTC m=+414.575647395 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883026 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883020118 +0000 UTC m=+414.575684866 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883117 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.883551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883672 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883699 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883719 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883708177 +0000 UTC m=+414.576372795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883745 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883735428 +0000 UTC m=+414.576400736 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883759 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883880 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883868982 +0000 UTC m=+414.576533900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883914 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883937 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883951 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not 
registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883952 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.883980 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883972025 +0000 UTC m=+414.576636743 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884001 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.883990715 +0000 UTC m=+414.576655423 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884158 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884201 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: 
I0813 19:50:43.884414 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884488 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.884514 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884567 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884579 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884648 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884652 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object 
"openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884592 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884706 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884720 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884721 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884728 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884744 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884883 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" 
not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884899 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884909 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.884602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.884590163 +0000 UTC m=+414.577254881 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885048 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885163 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 
19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885283 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885269792 +0000 UTC m=+414.577934380 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885357 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885380 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885361995 +0000 UTC m=+414.578026673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885392605 +0000 UTC m=+414.578057263 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885411186 +0000 UTC m=+414.578075784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885438 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885440 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885432107 +0000 UTC m=+414.578096735 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885467 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885460627 +0000 UTC m=+414.578125335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885487 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885480158 +0000 UTC m=+414.578144746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885506 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885532079 +0000 UTC m=+414.578196987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.885575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885622 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885638 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885652 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.885677414 +0000 UTC m=+414.578342142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.885768 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886079 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886101 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886112 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886423 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886486 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886467606 +0000 UTC m=+414.579132234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886504607 +0000 UTC m=+414.579169205 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886524 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886546 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886556 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886525 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886519508 +0000 UTC m=+414.579184096 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886587 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886616 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886626 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886633 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886625161 +0000 UTC m=+414.579289759 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886643371 +0000 UTC m=+414.579308079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.886673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.886664362 +0000 UTC m=+414.579329160 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.886722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887184 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887236 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887448 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887608 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.887706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887942 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887984 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.887997 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888029 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888020291 +0000 UTC m=+414.580684909 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888083 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888098 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888108 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888126604 +0000 UTC m=+414.580791512 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888172 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888198 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888192275 +0000 UTC m=+414.580856993 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888244 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888255 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888280448 +0000 UTC m=+414.580945176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888295 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888313 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888328 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888350 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.88833916 +0000 UTC m=+414.581003778 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.8883598 +0000 UTC m=+414.581024398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888382 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888393 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888428 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888421622 +0000 UTC m=+414.581086350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888470 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888482 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888490 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888511 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888526 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888535 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888507274 +0000 UTC m=+414.581172002 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888566 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888568106 +0000 UTC m=+414.581232814 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888582 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888593 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888613 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888667 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888656339 +0000 UTC m=+414.581321067 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888682109 +0000 UTC m=+414.581346777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888700 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888722161 +0000 UTC m=+414.581386899 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888469 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888867 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888880 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.888915 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.888906736 +0000 UTC m=+414.581571474 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.890483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.889077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.88905856 +0000 UTC m=+414.581726188 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.949696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.989954 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990154 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990313 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh 
podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.990387736 +0000 UTC m=+414.683052464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.990492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990832 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990886 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.990991 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.990967033 +0000 UTC m=+414.683631781 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991646 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991690 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.991701 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:43 crc kubenswrapper[4183]: I0813 19:50:43.991906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:43 crc kubenswrapper[4183]: E0813 19:50:43.992018 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:47.991993472 +0000 UTC m=+414.684658100 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.005340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.099383 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050"} Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.121366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.129156 4183 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.140241 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.155201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193054 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" exitCode=0
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193152 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.193071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.209923 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.210707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:44 crc kubenswrapper[4183]: E0813 19:50:44.211115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.225975 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"}
Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.228432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.295126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.356001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.423151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.435161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.442647 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.442750 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.459239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.504534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.564752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.591753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.621835 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.651268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.703514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.751489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.837500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.882690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:44 crc kubenswrapper[4183]: I0813 19:50:44.958035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.019943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48c1471ee6eaa615e5b0e19686e3fafc0f687dc03625988c88b411dc682d223f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:24:26Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.096662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.130982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.162479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.198924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.209645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209706 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.209959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.210709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211241 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.212724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.212899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213100 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.213675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.214883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.214570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215387 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.215694 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.215933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.216268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.216348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.232187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.241962 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.253237 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.258222 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.277664 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.294170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.302644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.308194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b"} Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.348601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: E0813 19:50:45.350341 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.381324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.430546 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.430641 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.435459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.478754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.517912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.576271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.613625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.656204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.706152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.751087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.800708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.838103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.871207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.923054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:45 crc kubenswrapper[4183]: I0813 19:50:45.965925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.003440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.040298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.084672 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.111724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.150511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.205934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208289 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.208577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.209439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.209623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.210222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.210708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:46 crc kubenswrapper[4183]: E0813 19:50:46.211304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.326550 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212"} Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.339235 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"} Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.410512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.471721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.504611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.620223 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.620387 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.716418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.770626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9572cbf27a025e52f8350ba1f90df2f73ac013d88644e34f555a7ae71822234\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:23:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:07Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:46 crc kubenswrapper[4183]: I0813 19:50:46.824290 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.209975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210038 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210089 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210543 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210683 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.210747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211192 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.211484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212293 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212650 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.212892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.212935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.213066 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.213882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.214513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.214643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.268143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.305553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.366437 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303"} Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.378926 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f"} Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.434695 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.435147 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.501495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613707 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 
19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.613762 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.614007 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.614054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.614478 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.614559 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.614536227 +0000 UTC m=+422.307200935 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615022 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615069 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615058332 +0000 UTC m=+422.307722950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615160 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615278 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615251447 +0000 UTC m=+422.307916065 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615467 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615516745 +0000 UTC m=+422.308181523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615632 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.615684 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.615670329 +0000 UTC m=+422.308335107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617234 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.617377 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617469 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617525 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert 
podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.617514212 +0000 UTC m=+422.310179020 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617585 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617638 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.617619875 +0000 UTC m=+422.310285223 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.617889 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.618134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618139 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618166 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.61815581 +0000 UTC m=+422.310820398 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.618732 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.618712646 +0000 UTC m=+422.311377264 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619526 4183 secret.go:194] Couldn't get secret 
openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.619749666 +0000 UTC m=+422.312414274 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.619942 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620317352 +0000 UTC m=+422.312982040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.619764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620417 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620507 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620562 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620621 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.62060515 +0000 UTC m=+422.313269988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620660 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620729 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620718823 +0000 UTC m=+422.313383451 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620767 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620911 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620927979 +0000 UTC m=+422.313592607 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.620967 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.620980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.620996711 +0000 UTC m=+422.313661329 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621069 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621131 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621240 4183 
secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621258 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.621292 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621369 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621397 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 
19:50:47.621513 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621566 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621551147 +0000 UTC m=+422.314215955 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621594 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621599 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621584858 +0000 UTC m=+422.314249506 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621659 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621712 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621699641 +0000 UTC m=+422.314364429 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621734 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.621760343 +0000 UTC m=+422.314424931 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621902 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621932 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621924168 +0000 UTC m=+422.314588876 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621947 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.621940318 +0000 UTC m=+422.314604916 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622000 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622028 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622021571 +0000 UTC m=+422.314686179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.621068 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622167 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622137704 +0000 UTC m=+422.314802412 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622095 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622209 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622258 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.622245777 +0000 UTC m=+422.314910525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.622400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.622379931 +0000 UTC m=+422.315044609 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.689579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725167 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725303 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: 
object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725388565 +0000 UTC m=+422.418053453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725475 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725577 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725610 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725643622 +0000 UTC m=+422.418308510 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725698 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725723 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.725732255 +0000 UTC m=+422.418396873 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.725927 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.725974 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726025 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726013713 +0000 UTC m=+422.418678341 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726060 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726105455 +0000 UTC m=+422.418770083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726140266 +0000 UTC m=+422.418804854 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726184 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.726227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.726216669 +0000 UTC m=+422.418881407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.727169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.727244 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727638 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.72766413 +0000 UTC m=+422.420328748 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727720 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.727753 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.727745212 +0000 UTC m=+422.420409830 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.743070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.798019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.828950 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829185 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829269 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.829246243 +0000 UTC m=+422.521911081 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829461 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.829506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.8294957 +0000 UTC m=+422.522160428 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829737 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.829871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830151 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830262 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830322 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830348 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830448 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830510 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830576 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830670 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.830770 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831033 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831064 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831129 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831665 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831900 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.831987 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832239 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832297 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832480 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832694 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832882 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832924 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.832974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833267 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833318 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833350 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833527 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833564 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833685 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.833964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.837627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.837965 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838021 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838035 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838051 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838103 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838083896 +0000 UTC m=+422.530748614 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838124 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838113817 +0000 UTC m=+422.530778425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830022 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838135 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838144 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838152 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838161 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830050 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830098 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838186659 +0000 UTC m=+422.530851347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838247 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838254961 +0000 UTC m=+422.530919669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838285 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838279041 +0000 UTC m=+422.530943629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838300 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838293802 +0000 UTC m=+422.530958400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838308452 +0000 UTC m=+422.530973040 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838321 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838330 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838360 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838348013 +0000 UTC m=+422.531012731 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838371 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838410 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838401915 +0000 UTC m=+422.531066623 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838414 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838479 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838539 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838590 4183 
secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838439376 +0000 UTC m=+422.531103994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838621 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838606801 +0000 UTC m=+422.531271419 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838641 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838633011 +0000 UTC m=+422.531297599 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838645 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838654 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838648692 +0000 UTC m=+422.531313290 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838676 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838667722 +0000 UTC m=+422.531332330 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838701 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838713 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838735834 +0000 UTC m=+422.531400442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838844 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838834747 +0000 UTC m=+422.531499465 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838879 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838900 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838906 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838923 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838931 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838948 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838959 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838951881 +0000 UTC m=+422.531616619 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838911 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838975 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.838967321 +0000 UTC m=+422.531631929 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.838993 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.838980911 +0000 UTC m=+422.531645529 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839013 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839036 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839135 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839156 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839165 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839175 4183 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839044 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839037803 +0000 UTC m=+422.531702421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839200 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839191857 +0000 UTC m=+422.531856465 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839218 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839211748 +0000 UTC m=+422.531876336 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839242 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839231119 +0000 UTC m=+422.531895707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839256 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83928366 +0000 UTC m=+422.531948278 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839297 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839321921 +0000 UTC m=+422.531986539 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839333 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839371 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839358622 +0000 UTC m=+422.532023240 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839220 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839397443 +0000 UTC m=+422.532062061 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839410 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839442 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839446 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.839438954 +0000 UTC m=+422.532103652 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839478 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839472065 +0000 UTC m=+422.532136653 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839503 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839502776 +0000 UTC m=+422.532167464 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839528127 +0000 UTC m=+422.532192895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839565 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839585 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839591 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839625 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 
19:50:47.839631 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83962072 +0000 UTC m=+422.532285398 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839595 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839659 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.83964991 +0000 UTC m=+422.532314629 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839680 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839670371 +0000 UTC m=+422.532335129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839707 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839739 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839734773 +0000 UTC m=+422.532399471 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839867 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839768684 +0000 UTC m=+422.532433272 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839898 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839905 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839920 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839931 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839935339 +0000 UTC m=+422.532599957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839959 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.839949249 +0000 UTC m=+422.532613867 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839992 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839995 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840004 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840012 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840020 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840014021 +0000 UTC m=+422.532678639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840030151 +0000 UTC m=+422.532694759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840059 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840069 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840078 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840101113 +0000 UTC m=+422.532765811 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840111 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840138 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839095 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840197 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.830081 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.837980 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840139 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840133744 +0000 UTC m=+422.532798362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840215 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840222027 +0000 UTC m=+422.532886615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840249 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840287 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840243467 +0000 UTC m=+422.532908335 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840309599 +0000 UTC m=+422.532974187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840070 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840349 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840358 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840351351 +0000 UTC m=+422.533015969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.839372 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840300 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840395 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840379601 +0000 UTC m=+422.533044309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840406552 +0000 UTC m=+422.533071170 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840431 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840426093 +0000 UTC m=+422.533090681 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840459 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840474 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840481 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840508 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840566 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840614 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840630 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840634 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840648 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840657 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840510 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840500815 +0000 UTC m=+422.533165523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840708 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840699081 +0000 UTC m=+422.533363669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840868 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840856325 +0000 UTC m=+422.533521013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840872 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840884726 +0000 UTC m=+422.533549324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840481 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840710 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840753 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840908 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840902896 +0000 UTC m=+422.533567504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840963 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840950378 +0000 UTC m=+422.533614996 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840990 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840977998 +0000 UTC m=+422.533642586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.840998939 +0000 UTC m=+422.533663527 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841015 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841021 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.84101563 +0000 UTC m=+422.533680218 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841029 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841043 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841066641 +0000 UTC m=+422.533731249 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841092 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841106 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841114 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841131 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841139653 +0000 UTC m=+422.533804261 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.840567 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841163104 +0000 UTC m=+422.533827702 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.841187 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.841179374 +0000 UTC m=+422.533844092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842130 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842245 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842271 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.842559 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.842411779 +0000 UTC m=+422.535076617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.874231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.938979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.939137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.939170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939409 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939429 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939592 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.939630198 +0000 UTC m=+422.632294816 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.939725 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.939761842 +0000 UTC m=+422.632426710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940108 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940129 4183 projected.go:294] Couldn't get 
configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940141 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940181 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940169033 +0000 UTC m=+422.632833711 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940232 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940265 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940279 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940268286 +0000 UTC m=+422.632932904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940334 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940345 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940349 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: 
E0813 19:50:47.940366 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940391 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940397 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.9403891 +0000 UTC m=+422.633053798 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940458 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940479 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940488 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.940511083 +0000 UTC m=+422.633175711 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940727 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.940906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940908 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. 
No retries permitted until 2025-08-13 19:50:55.940896614 +0000 UTC m=+422.633561322 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940987 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941019 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941011157 +0000 UTC m=+422.633675865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941019 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941070 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941084 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941093 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941113 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941129 
4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94111815 +0000 UTC m=+422.633782768 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.940570 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941148 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941156 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941180622 +0000 UTC m=+422.633845240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941221 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941231 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941276 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941259365 +0000 UTC m=+422.633924053 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941301 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941312 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941322536 +0000 UTC m=+422.633987234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941384 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941402 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941412 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941426 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc 
kubenswrapper[4183]: E0813 19:50:47.941438 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941445 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941448 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94143846 +0000 UTC m=+422.634103238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941469 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94146257 +0000 UTC m=+422.634127188 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941502 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941509 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.941518 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941520 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941559 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941573 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941584 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941602 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941519 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941664 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941666 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941713 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941732 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941676 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.941534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.941525612 +0000 UTC m=+422.634190390 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942374 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942644 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942716 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.942875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943281 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943332 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943395 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943458 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943438177 +0000 UTC m=+422.636102905 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943499 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.943542 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943529049 +0000 UTC m=+422.636193667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943594 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943629 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943615322 +0000 UTC m=+422.636280020 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943658 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943680 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943692 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943701 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943684044 +0000 UTC m=+422.636348752 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943739215 +0000 UTC m=+422.636403903 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944075 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944168 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944191 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.943764986 +0000 UTC m=+422.636429574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944206049 +0000 UTC m=+422.636870647 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944226 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944220879 +0000 UTC m=+422.636885467 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944242 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94423529 +0000 UTC m=+422.636899878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944245 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944259 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944269 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944261 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94425088 +0000 UTC m=+422.636915568 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944304 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944316 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944307182 +0000 UTC m=+422.636971770 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944336 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944329442 +0000 UTC m=+422.636994150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944356 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944393 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944408 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944417 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944394 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944385594 +0000 UTC m=+422.637050312 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944476 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944492 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944503 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944516 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944527 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944566 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944577 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944469226 +0000 UTC m=+422.637133924 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944621 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.94461294 +0000 UTC m=+422.637277528 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944671092 +0000 UTC m=+422.637335680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944696 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944689213 +0000 UTC m=+422.637353801 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944705323 +0000 UTC m=+422.637369911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944723814 +0000 UTC m=+422.637388402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944744304 +0000 UTC m=+422.637408902 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.944764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.944758215 +0000 UTC m=+422.637422813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.943607 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: E0813 19:50:47.945173 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:55.945145896 +0000 UTC m=+422.637813094 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:50:47 crc kubenswrapper[4183]: I0813 19:50:47.970464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:47Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.025491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.047323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.047561 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048135 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048174 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:50:48 crc kubenswrapper[4183]:
E0813 19:50:48.048191 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048211 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048229 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048241 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.048143 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048473 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048542 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048243422 +0000 UTC m=+422.740908190 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048529 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048654 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048625133 +0000 UTC m=+422.741289911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.048709 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:50:56.048689165 +0000 UTC m=+422.741354073 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.160189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.211473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.211613 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:48 crc kubenswrapper[4183]: E0813 19:50:48.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.221377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.295030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.390246 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b" exitCode=0 Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.390423 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b"} Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.397025 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b"} Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.435909 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Aug 13 19:50:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.436378 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.645136 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.691565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.869448 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:48 crc kubenswrapper[4183]: I0813 19:50:48.919496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.108730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209291 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209645 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209904 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.209976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210033 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210155 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.210434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.213345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.215578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.216999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.217974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.218493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:49 crc kubenswrapper[4183]: E0813 19:50:49.221470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.433890 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:49 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.433965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.675231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.737545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.863373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:49 crc kubenswrapper[4183]: I0813 19:50:49.933597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.117097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.210288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.211480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.213901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:50:50 crc kubenswrapper[4183]: E0813 19:50:50.351920 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.412428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652"}
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.416657 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87"}
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.430274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.437261 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:50 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:50 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:50 crc kubenswrapper[4183]: I0813 19:50:50.437763 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.211932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.211973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.212656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.212990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.213891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.213941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.214915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.214970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215453 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.215937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.215502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216356 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.216946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.216979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217127 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.217232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.217894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:51 crc kubenswrapper[4183]: E0813 19:50:51.218583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.267356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.438392 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:51 crc kubenswrapper[4183]: I0813 19:50:51.438476 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.011508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.092727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.152017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.195546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.209546 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.214118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.214256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:52 crc kubenswrapper[4183]: E0813 19:50:52.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.255546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.321627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.379498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.438071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.438229 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.445266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.466404 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9"} Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.507118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.587255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.647267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.695147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.760623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.811756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.859250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885758 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885920 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.885949 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:52Z","lastTransitionTime":"2025-08-13T19:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.899433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:52 crc kubenswrapper[4183]: I0813 19:50:52.923967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.206752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.209838 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.209926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210187 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.210374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210380 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210944 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.210981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211372 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.211763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.212996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.213960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.214690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.215999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.216968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.217041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.217102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.248569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 
2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.289767 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.291489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307261 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307400 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307423 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.307515 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.337623 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.338296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349112 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349235 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.349255 4183 setters.go:574] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.368420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.383148 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391449 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391586 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391609 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391635 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.391668 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.416399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.425267 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447697 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447935 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.447973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.448006 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.448058 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:50:53Z","lastTransitionTime":"2025-08-13T19:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.455621 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.482859 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: E0813 19:50:53.482984 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.512518 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa"} Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.525334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.605677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.643417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.707289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.770990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.829555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.870005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.917092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:53 crc kubenswrapper[4183]: I0813 19:50:53.985387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.036606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 
2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.082678 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.117446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.178301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210443 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:54 crc kubenswrapper[4183]: E0813 19:50:54.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.246101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.317140 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.451056 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.451358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.592488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.667318 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.667770 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668370 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668440 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.668464 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.759257 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:54 crc kubenswrapper[4183]: I0813 19:50:54.912115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.013890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.053062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:48Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ce15d141220317b4e57b1599c379e880d26b45054aa1776fbad6346dd58a55d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce15d141220317b4e57b1599c379e880d26b45054aa1776fbad6346dd58a55d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f39
2d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.126303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.212486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.220343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213404 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.221934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213700 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.222651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.213911 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.214597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.216981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.218536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.219140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.219373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.226435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.226770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.227597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.228215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.229741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.230749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.231995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.233340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.233730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.234090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.234336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.235902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.236745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.237080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.326347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.368346 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.378972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.447613 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:50:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:50:55 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:50:55 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.447956 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.461126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.684736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.684974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685044 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.685211 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686283 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686525 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686549 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.68728321 +0000 UTC m=+438.379948088 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.686578 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.687708552 +0000 UTC m=+438.380373180 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.687758 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.688567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.688554646 +0000 UTC m=+438.381219274 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.689384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.689355569 +0000 UTC m=+438.382020177 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.689480 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.689461712 +0000 UTC m=+438.382126400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689672 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.689734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690132 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.690684417 +0000 UTC m=+438.383349035 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.690422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691306 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.691526 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690473 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690501 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.690309 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.691393 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.691463 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692606 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.692594162 +0000 UTC m=+438.385259040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692992 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.692982133 +0000 UTC m=+438.385646721 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693124 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693110486 +0000 UTC m=+438.385775084 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693206309 +0000 UTC m=+438.385870897 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.693320 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.693310182 +0000 UTC m=+438.385974780 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693615 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.693733 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.692076 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694048 
4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694260 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69429632 +0000 UTC m=+438.386960938 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694471 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.694459495 +0000 UTC m=+438.387124123 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694504 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694661 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.694651941 +0000 UTC m=+438.387316559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694740 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69499471 +0000 UTC m=+438.387659438 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694659 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695038 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695087 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.695074473 +0000 UTC m=+438.387739331 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.694083 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695126 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.695118654 +0000 UTC m=+438.387783252 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.694214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695230 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695458 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695484 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.695516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.695976 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696095 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696244 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696283 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696457 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696587 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696370 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696411 4183 configmap.go:199] Couldn't get 
configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.696926 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697436 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.69742201 +0000 UTC m=+438.390086608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697498 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697486782 +0000 UTC m=+438.390151370 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697510942 +0000 UTC m=+438.390175660 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697527913 +0000 UTC m=+438.390192591 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.697551163 +0000 UTC m=+438.390215761 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697572 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697565264 +0000 UTC m=+438.390229852 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697588 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697581864 +0000 UTC m=+438.390246542 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.697615 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.697604985 +0000 UTC m=+438.390269573 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.801620 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.802319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.802378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.802669 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803059 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803083 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.80273274 +0000 UTC m=+438.495397548 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.803576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.803550453 +0000 UTC m=+438.496215061 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.803959 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804408 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.804581 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.806009 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.806293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807717 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.807895747 +0000 UTC m=+438.500560485 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.807996 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808038 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808028671 +0000 UTC m=+438.500693289 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808087 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808114633 +0000 UTC m=+438.500779241 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808166 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808193 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808185625 +0000 UTC m=+438.500850493 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808366 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808393271 +0000 UTC m=+438.501057889 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808947 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.808987 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.808978388 +0000 UTC m=+438.501643106 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.802598 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.811435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.812129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.810984025 +0000 UTC m=+438.504770305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.851263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915256 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915371 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915425 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915666 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915700 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915735 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915788 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.915969 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916017 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916243 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916390 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916415 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916473 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916592 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916653 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916680 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.916784 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917316 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917842 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.917419657 +0000 UTC m=+438.610084385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.917952 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918000 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.917980323 +0000 UTC m=+438.610645041 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918051 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918079 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918071946 +0000 UTC m=+438.610736554 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918120 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918154 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918144568 +0000 UTC m=+438.610809406 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918200 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.91822545 +0000 UTC m=+438.610890048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918266 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918290 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918282462 +0000 UTC m=+438.610947060 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918348 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918374 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918367454 +0000 UTC m=+438.611032052 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918438 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918459 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918474 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918515 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918504678 +0000 UTC m=+438.611169286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918581 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918594 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918602 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918629 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918622042 +0000 UTC m=+438.611286750 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918674 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918705 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918696024 +0000 UTC m=+438.611360722 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918755 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918785 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.918778266 +0000 UTC m=+438.611443244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.918990 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919008 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919017 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod 
openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919055 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.919045494 +0000 UTC m=+438.611710122 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919418 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919440 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919450 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919506 4183 configmap.go:199] Couldn't get configMap 
openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919551 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919628 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919640 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919648 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919716 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919855 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919869 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 
19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919902 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.919887628 +0000 UTC m=+438.612552236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.919906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919946 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.919979 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.91997055 +0000 UTC m=+438.612635248 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920023 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920052 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920045362 +0000 UTC m=+438.612709970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920094 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920129 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920121635 +0000 UTC m=+438.612786233 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920175 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920213 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920201277 +0000 UTC m=+438.612865885 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920266 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920278 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920291 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 
19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.9203158 +0000 UTC m=+438.612980508 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920369 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920389642 +0000 UTC m=+438.613054250 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920456 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920484 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920477535 +0000 UTC m=+438.613142243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920526 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.920561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.920553607 +0000 UTC m=+438.613218215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.920878 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921058 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921091 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921082302 +0000 UTC m=+438.613746910 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921133 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921155184 +0000 UTC m=+438.613823082 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921053 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921211 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921239 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921232896 +0000 UTC m=+438.613897604 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921276 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921297218 +0000 UTC m=+438.613961826 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921345 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921372 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.92136588 +0000 UTC m=+438.614030488 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921389241 +0000 UTC m=+438.614053829 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921418 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921411411 +0000 UTC m=+438.614075999 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921425802 +0000 UTC m=+438.614090460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921441252 +0000 UTC m=+438.614105840 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921492 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921515944 +0000 UTC m=+438.614180542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921573 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921595287 +0000 UTC m=+438.614259895 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921648 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921683 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921668739 +0000 UTC m=+438.614333337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921722 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.921762 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921752631 +0000 UTC m=+438.614417239 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922122 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.921977838 +0000 UTC m=+438.614642706 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922324 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922356188 +0000 UTC m=+438.615020806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922417 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922446 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922438651 +0000 UTC m=+438.615103379 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922525 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922539 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922555 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922594 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922585945 +0000 UTC m=+438.615250683 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922617 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922609526 +0000 UTC m=+438.615274124 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922653 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.922736 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.922672687 +0000 UTC m=+438.615337305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923004 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923136 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923326 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923468 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923532 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923557 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923630 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923697 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923725 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.923758 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924055 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.924098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924200 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924221 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924233 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924263 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924254833 +0000 UTC m=+438.616919451 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924374 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924389 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924432 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924423368 +0000 UTC m=+438.617088096 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924472 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924490 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924497 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924506 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924514 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924521 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924543811 +0000 UTC m=+438.617208429 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924591 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924657 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924566032 +0000 UTC m=+438.617230620 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924743 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924734136 +0000 UTC m=+438.617398854 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924752707 +0000 UTC m=+438.617417415 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924778 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924871 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.92485985 +0000 UTC m=+438.617524458 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924897 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924932 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924953 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924962 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924976 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925029 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925098 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925111 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925118 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925145 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925163 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925191 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925207 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925232 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925266 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924935 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.924927252 +0000 UTC m=+438.617591980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937200363 +0000 UTC m=+438.629864961 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937243 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937234514 +0000 UTC m=+438.629899102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937253364 +0000 UTC m=+438.629918072 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937274 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937267495 +0000 UTC m=+438.629932093 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937299 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937288645 +0000 UTC m=+438.629953233 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937312466 +0000 UTC m=+438.629977054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937344 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937336327 +0000 UTC m=+438.630000925 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937358707 +0000 UTC m=+438.630023305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925301 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937573 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925337 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925384 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925434 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925468 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.924743 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925505 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925625 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.925665 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:50:55 crc kubenswrapper[4183]: I0813 19:50:55.928645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.937994 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.937974755 +0000 UTC m=+438.630639383 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938027156 +0000 UTC m=+438.630691774 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938059 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938074 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938410227 +0000 UTC m=+438.631074965 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938568 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938559662 +0000 UTC m=+438.631224370 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938582 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938575782 +0000 UTC m=+438.631240490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938597 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:11.938590642 +0000 UTC m=+438.631255360 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938446 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938612 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938630694 +0000 UTC m=+438.631295302 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938463 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938654 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938679735 +0000 UTC m=+438.631344463 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938714 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:55 crc kubenswrapper[4183]: E0813 19:50:55.938742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:11.938735957 +0000 UTC m=+438.631400585 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.025592 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.025701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026004 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: 
\"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026156 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod 
\"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026292 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026429 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.026744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: 
\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027027 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027185 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027669 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.027904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 
19:50:56.027971 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028015 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028156 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: 
\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028265 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.028649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.028960 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029188 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029166811 +0000 UTC m=+438.721831769 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029301 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029321 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029334 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 
19:50:56.029371 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029360627 +0000 UTC m=+438.722025315 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029494 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029526441 +0000 UTC m=+438.722191140 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029607 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029621 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029631 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029662255 +0000 UTC m=+438.722326953 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029736 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029780 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.029768928 +0000 UTC m=+438.722433606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.029966 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030001685 +0000 UTC m=+438.722666503 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030070 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030109 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030092478 +0000 UTC m=+438.722757176 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030173 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030188 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030200 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc 
kubenswrapper[4183]: E0813 19:50:56.030236 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030224091 +0000 UTC m=+438.722888779 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030290 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030317534 +0000 UTC m=+438.722982232 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030379 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030419 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030409057 +0000 UTC m=+438.723073755 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030467 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030515 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030503499 +0000 UTC m=+438.723168187 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030565 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030608 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030597342 +0000 UTC m=+438.723262030 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030673 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030691 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030702 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.030744 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.030725956 +0000 UTC m=+438.723390654 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031240 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052060916 +0000 UTC m=+438.744725524 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031395 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052114667 +0000 UTC m=+438.744779285 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031469 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052146 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052160 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc 
kubenswrapper[4183]: E0813 19:50:56.052194 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052186179 +0000 UTC m=+438.744850787 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031518 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052217 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052225 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052242381 +0000 UTC m=+438.744906989 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.031562 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052270 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052277 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052299 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052293622 +0000 UTC m=+438.744958230 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038112 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052325 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052353 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052346404 +0000 UTC m=+438.745011012 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038147 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052392 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052384525 +0000 UTC m=+438.745049133 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038188 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052415 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052423 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod 
openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.052752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.052446107 +0000 UTC m=+438.745110715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038222 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053401 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053413 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053450 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053438615 +0000 UTC m=+438.746103293 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038271 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053477 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053486 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053516757 +0000 UTC m=+438.746181375 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038317 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053551 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053562 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053585829 +0000 UTC m=+438.746250447 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038353 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.0536242 +0000 UTC m=+438.746288818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038405 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:12.053660101 +0000 UTC m=+438.746324789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038456 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053702 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053711 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053738 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053729263 +0000 UTC m=+438.746393961 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038493 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.053783 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.053774815 +0000 UTC m=+438.746439503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038522 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054154 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054143315 +0000 UTC m=+438.746808003 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.038554 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054186946 +0000 UTC m=+438.746851634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039152 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054221 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054233 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object 
"openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054256728 +0000 UTC m=+438.746921436 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039202 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054292 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054302 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054334 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 
podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.05432556 +0000 UTC m=+438.746990238 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039241 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054366301 +0000 UTC m=+438.747030989 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039293 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054394 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054423 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054417813 +0000 UTC m=+438.747082431 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039337 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054442 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054449 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054478 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054470504 +0000 UTC m=+438.747135142 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.039593 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.054517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.054508025 +0000 UTC m=+438.747172713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.063362 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.063190003 +0000 UTC m=+438.755854841 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.069783 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.131035 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133448 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133513 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.133528 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.134696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.135692 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.137675 4183 projected.go:294] Couldn't 
get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.137709 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.135741 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.135714826 +0000 UTC m=+438.828379424 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.138024 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.137987071 +0000 UTC m=+438.830652129 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.141418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142038 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142077 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.142251 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:12.142238353 +0000 UTC m=+438.834903071 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.184985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.209896 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.209993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.210593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.210987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.211124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:56 crc kubenswrapper[4183]: E0813 19:50:56.211261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.247521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.293759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.333889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.391443 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.433995 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.434142 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.434338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.557632 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9"} Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.656619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:56 crc kubenswrapper[4183]: I0813 19:50:56.900761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.113489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49c
c12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686
e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215720 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.215904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.215986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.216672 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.216746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.218157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.218341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.218918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.219538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.219689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.219786 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220176 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220408 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220778 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.220958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.220964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221226 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.221711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.221749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.225035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.227431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.227920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.227983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228347 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.228493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.228087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.229981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:50:57 crc kubenswrapper[4183]: E0813 19:50:57.230055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.253521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; 
done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.330164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.401170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.446947 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.447375 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.468128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.495680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.528711 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.556147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.595352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.677505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.724467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.802921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.845900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.891453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:57 crc kubenswrapper[4183]: I0813 19:50:57.932938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.009762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.134739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:50:58 crc kubenswrapper[4183]: E0813 19:50:58.210612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.286195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.322688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.366602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.400446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.436993 4183 
patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.437129 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.438613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4
b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.475304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.508161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.537058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.573022 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87" exitCode=0 Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.573114 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" 
event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87"} Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.574289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.600757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.628072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.649170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.686045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.715759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.748028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.770996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.797042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.827005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.871950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.905761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.947086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:58 crc kubenswrapper[4183]: I0813 19:50:58.974070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.020358 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.075759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.129960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.169723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.214580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.214738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.214984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.215712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.215947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.216440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.216713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218842 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.218985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.218979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.219398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.219754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.220597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.220945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.221777 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.222373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.224393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.224581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:50:59 crc kubenswrapper[4183]: E0813 19:50:59.225576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.237571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.291390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4
fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.322467 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.350143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.406518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.435894 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:50:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:50:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:50:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.436017 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.445968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.473691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.501036 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.531341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.613399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.652141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.686485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.728089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.758686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.806360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.848123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.894343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.940165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:50:59 crc kubenswrapper[4183]: I0813 19:50:59.991706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:50:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.075096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.136959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.192562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.208975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.209662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.210013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.210108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.233030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.264467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.293098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.322323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.361048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: E0813 19:51:00.378410 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.391430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289
b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.425056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.433337 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.433920 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.461912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the 
watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.525496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.552112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.579068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.606494 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf" exitCode=0 Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.606575 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf"} Aug 13 
19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.618186 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6"} Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.622722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.658214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.694858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.734452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.807626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.833256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.858683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.901316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 
2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.932641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-ho
st-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.961318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:00 crc kubenswrapper[4183]: I0813 19:51:00.999348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.036401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.063490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.114161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.147094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.179297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210075 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.210461 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.210863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211513 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.211691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.211704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212778 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.212950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.213945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.215660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.216049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.216562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.217613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.218949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.219976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.220108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:01 crc kubenswrapper[4183]: E0813 19:51:01.220255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.250750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.281051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.309018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.343007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.370518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.398015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.457964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.458680 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.468953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.532073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.613927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.659184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.715514 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.766423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.805048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.837733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.880652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:01 crc kubenswrapper[4183]: I0813 19:51:01.923471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f39
2d2f10d6248f7db828278d0972\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d4e207328f4e3140d751e6046a1a8d14a7f392d2f10d6248f7db828278d0972\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://455c9dcaca7ee7118b89a599c97b6a458888800688dd381f8c5dcbd6ba96e17d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.001616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.034109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.065888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.108057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.148203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.175307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.199300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.209926 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.210538 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:02 crc kubenswrapper[4183]: E0813 19:51:02.210896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.232958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.262201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.284102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.312208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.346709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.374514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.425160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.432559 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.432655 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.469506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.498331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.535115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.562049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.613405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.652239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.658277 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6" exitCode=0 Aug 13 19:51:02 crc kubenswrapper[4183]: I0813 19:51:02.658379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.074150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.128730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.191086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208759 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.208863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209101 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209274 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209435 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209615 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.209894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209894 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210116 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210270 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.210874 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211507 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.211893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.211981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.212065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.212105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.212879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.219195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 
10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.258186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.320661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.353456 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.385083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.415414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.437119 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.437734 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.472669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.516271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.561183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.599609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.648547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.701203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.738746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 
2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.739324 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.740385 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.740622 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.751859 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.864463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865516 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865572 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865613 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.865695 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.891201 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910497 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910561 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.910637 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.928645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: E0813 19:51:03.969222 4183 
kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.971533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980643 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980716 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980741 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:03 crc kubenswrapper[4183]: I0813 19:51:03.980764 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:03Z","lastTransitionTime":"2025-08-13T19:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.000279 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014669 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014713 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014730 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014759 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.014887 4183 setters.go:574] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:04Z","lastTransitionTime":"2025-08-13T19:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.036882 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e032
34f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":5
01535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e6
1b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.056893 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.058050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.060678 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.062055 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.062189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:04 crc kubenswrapper[4183]: 
I0813 19:51:04.062748 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:04Z","lastTransitionTime":"2025-08-13T19:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.091697 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.091757 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.099907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.147366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.176675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.193186 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209157 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.209997 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:04 crc kubenswrapper[4183]: E0813 19:51:04.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.217704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0e
a9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.269547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.280166 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.356140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.397459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.433765 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:04 crc kubenswrapper[4183]: healthz 
check failed Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.434007 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.468167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.524495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.616197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.650058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.671362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.755020 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c"} Aug 13 19:51:04 crc kubenswrapper[4183]: I0813 19:51:04.921579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:04.999980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.038347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.063112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.093964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.176644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f
956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0ea8f66b79c23a45ba2f75937377749519dc802fb755a7fce9c90efb994507\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211072 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.211895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.211979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212490 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212584 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.212712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.212256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.214629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.214691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214775 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.214951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.215702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.215874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.216644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.216732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.217144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.217747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.218994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.219079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.219119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.229455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.350753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: E0813 19:51:05.382071 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.420324 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.434598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:05 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:05 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.434696 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.482096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.523588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.590380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.617719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.722247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.778175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.927874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:05 crc kubenswrapper[4183]: I0813 19:51:05.970170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.065532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.125139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209192 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.209930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:06 crc kubenswrapper[4183]: E0813 19:51:06.210267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.432341 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:06 crc kubenswrapper[4183]: I0813 19:51:06.432441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209598 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.209925 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210056 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210721 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211145 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.211643 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.212436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.212635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213220 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.213709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.213967 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214391 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.214667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.214986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.215162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.215388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.215572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.215764 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.216767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.217396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.217551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.218303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:07 crc kubenswrapper[4183]: E0813 19:51:07.218644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.432438 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.432909 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.495766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.548358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.618118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.646137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.669107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.692050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.738361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.804098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.833114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.862096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.898239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.939601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:07 crc kubenswrapper[4183]: I0813 19:51:07.972192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.027572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.061320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.097478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.127744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.148870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.179521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.205912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.208590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209366 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.209983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:08 crc kubenswrapper[4183]: E0813 19:51:08.210068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.234059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.264989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.289536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.434084 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.434184 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.916143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.949487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:08 crc kubenswrapper[4183]: I0813 19:51:08.984255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.013006 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.051369 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.076465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.108584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.144491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.174097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.207762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208596 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.208675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.208667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209123 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.210743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.211867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.211979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.212121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.212926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.213192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.214023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:09 crc kubenswrapper[4183]: E0813 19:51:09.214114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.238434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.274247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.298652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.432176 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:09 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:09 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.432303 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.825172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.850296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.889281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.918575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.948323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.972630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:09 crc kubenswrapper[4183]: I0813 19:51:09.990956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.011964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136
a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\
"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.059552 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.112084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.196681 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.210760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.210984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.211033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.211405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.212099 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.243285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: E0813 19:51:10.388547 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.444178 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:10 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:10 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.444296 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.530199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.575148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.630057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.681587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.721430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.774552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.814963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.846009 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c" exitCode=0 Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.846079 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c"} Aug 13 19:51:10 crc kubenswrapper[4183]: I0813 19:51:10.850105 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.084491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.116248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.137138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.180652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.201934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.209993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.211416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.211692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.212368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.212900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.214661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.215984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.216988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.217660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.218506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.219170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.223245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.223982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.238187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.252902 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output=""
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.261025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.281422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.313526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.340654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.369495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.389295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.407039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.432526 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.432621 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.447233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47
4888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.467723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.492706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.517337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.545198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.567309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.586343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.606654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.631234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.649246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.666476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.679672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.701056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.721531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.742491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.759196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.772995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773082 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773139 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773192 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773230 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.773207915 +0000 UTC m=+470.465872723 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.773255 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.773240246 +0000 UTC m=+470.465904964 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773293 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773379 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773409 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773437 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773472 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773544 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773589 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773720 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773865 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.773989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774147 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod 
\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774641 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774705 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.774866 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775040 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775115 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775096969 +0000 UTC m=+470.467761707 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775172 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775243 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775227413 +0000 UTC m=+470.467892221 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775298 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775329 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775321926 +0000 UTC m=+470.467986554 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775384 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775429169 +0000 UTC m=+470.468093897 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775477 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775496171 +0000 UTC m=+470.468160779 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775538 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775561 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775554363 +0000 UTC m=+470.468219071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775620 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775633 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775645 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775664646 +0000 UTC m=+470.468329374 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775712 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775737 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775731128 +0000 UTC m=+470.468395856 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775858 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775897 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.775887872 +0000 UTC m=+470.468552600 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775941 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.775966 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.775959624 +0000 UTC m=+470.468624552 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776001 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.776017276 +0000 UTC m=+470.468682024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776055 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776070067 +0000 UTC m=+470.468734795 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776111 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776130729 +0000 UTC m=+470.468795457 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776173 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776195 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776188091 +0000 UTC m=+470.468852789 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776247 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776260 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access 
podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776278083 +0000 UTC m=+470.468942791 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776322 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776345 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776338965 +0000 UTC m=+470.469003663 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776379 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.776394707 +0000 UTC m=+470.469059315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776434 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776452058 +0000 UTC m=+470.469116776 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776489 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776514 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.77650713 +0000 UTC m=+470.469171738 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776547 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776571 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776565181 +0000 UTC m=+470.469229799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776605 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776636 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.776627503 +0000 UTC m=+470.469292141 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776920 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.777015 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.777002534 +0000 UTC m=+470.469667382 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.776700 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.777065 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.777057855 +0000 UTC m=+470.469722583 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.782555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.803531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.821089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.838261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.855374 4183 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be" exitCode=0 Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.855422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be"} Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.871927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod 
\"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876125 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876314 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876341 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.876373 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.877594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.877687 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878534 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878613548 +0000 UTC m=+470.571278276 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878685 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878706221 +0000 UTC m=+470.571370829 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878750 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878855 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878768702 +0000 UTC m=+470.571433430 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878901 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878921087 +0000 UTC m=+470.571585705 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878963 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.878985 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.878979138 +0000 UTC m=+470.571643756 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879015 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.8790307 +0000 UTC m=+470.571695318 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879070 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879185 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879177064 +0000 UTC m=+470.571841682 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879233 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879311 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879253696 +0000 UTC m=+470.571918324 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879362 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.879416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.879409461 +0000 UTC m=+470.572074189 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.904850 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.927889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.946533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.968362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979616 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979870 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979923 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.979995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980027 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980079 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980130 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980157 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc 
kubenswrapper[4183]: I0813 19:51:11.980346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980369 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980467 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980518 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980605 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980631 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980656 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980709 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980907 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980965 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.980999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981086 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981110 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.981209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981274 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981315 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981369 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981347424 +0000 UTC m=+470.674012162 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981383 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981394 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981383195 +0000 UTC m=+470.674047793 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981403316 +0000 UTC m=+470.674068034 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981463 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981495018 +0000 UTC m=+470.674159756 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981544 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981555 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981570 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.98156337 +0000 UTC m=+470.674228098 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981583301 +0000 UTC m=+470.674248019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981603 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981628 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981621632 +0000 UTC m=+470.674286350 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981680 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981710 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981702304 +0000 UTC m=+470.674366932 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981726 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981747 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981863 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981766226 +0000 UTC m=+470.674430854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981889 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981880149 +0000 UTC m=+470.674544887 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981912 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981936 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981951652 +0000 UTC m=+470.674616340 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981986 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.981997 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.981989143 +0000 UTC m=+470.674653741 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982011 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982004583 +0000 UTC m=+470.674669171 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982051 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982063 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982075 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982069145 +0000 UTC m=+470.674733873 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982077 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982097 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982110 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982124796 +0000 UTC m=+470.674789404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982149 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982141687 +0000 UTC m=+470.674806275 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982165 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982186978 +0000 UTC m=+470.674851596 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982220 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982237 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982242 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982253 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982253 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982279 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982272091 +0000 UTC m=+470.674936889 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982298 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982290091 +0000 UTC m=+470.674954809 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982317 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982309542 +0000 UTC m=+470.674974130 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982334 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982345 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982359 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982368 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982376 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982365853 +0000 UTC m=+470.675030521 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982385 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982394 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982408 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982421 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982412705 +0000 UTC m=+470.675077323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982425 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982439 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982431585 +0000 UTC m=+470.675096173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982456 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982448116 +0000 UTC m=+470.675112904 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982458 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982488 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982493 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982487107 +0000 UTC m=+470.675151725 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982502 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982511 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982520 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982535 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982531998 +0000 UTC m=+470.675196726 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982490 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982582 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982617 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982632 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982640 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982659 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982685 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982723 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982732 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982866 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982881 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982894 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982881 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982934 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982936 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983119 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983123 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982371 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982944 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982224 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.982984 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.982966241 +0000 UTC m=+470.675630979 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.98331422 +0000 UTC m=+470.675978929 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983343 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983335961 +0000 UTC m=+470.676000549 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983052 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983362 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983355932 +0000 UTC m=+470.676020530 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983384 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983376992 +0000 UTC m=+470.676041590 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983406 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983391963 +0000 UTC m=+470.676056671 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983417733 +0000 UTC m=+470.676082411 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983434204 +0000 UTC m=+470.676135313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983503 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983494216 +0000 UTC m=+470.676158894 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983510786 +0000 UTC m=+470.676175474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.983534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.983525877 +0000 UTC m=+470.676190475 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.983574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984142 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984428 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984496 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") 
" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984626 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984766 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984875 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: 
\"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984917 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.984979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985047 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985073 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985190 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.985252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985516 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985530 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985539 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985572 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985562785 +0000 UTC m=+470.678227523 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985649 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985685668 +0000 UTC m=+470.678350366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985872 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:51:43.985705159 +0000 UTC m=+470.678369837 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985898 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985889384 +0000 UTC m=+470.678554092 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985912 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985906435 +0000 UTC m=+470.678571143 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985920805 +0000 UTC m=+470.678585513 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985942 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.985935475 +0000 UTC m=+470.678600073 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.985992 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986006 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986015 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986042 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986035038 +0000 UTC m=+470.678699776 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986085 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986097 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986106 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986130 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986123101 +0000 UTC m=+470.678787829 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986166 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986183532 +0000 UTC m=+470.678848140 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986232 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986242 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986251 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986275 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986268455 +0000 UTC m=+470.678933443 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986309 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986324746 +0000 UTC m=+470.678989465 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986369 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986379 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986388 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986405849 +0000 UTC m=+470.679070587 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986447 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.98646266 +0000 UTC m=+470.679127368 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986512 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986521 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986545 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986539113 +0000 UTC m=+470.679203731 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986581 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.986604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.986598654 +0000 UTC m=+470.679263272 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987126 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987170 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object 
"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987205112 +0000 UTC m=+470.679869720 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987251 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987269594 +0000 UTC m=+470.679934322 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987307 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987322555 +0000 UTC m=+470.679987173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987383 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987395 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod 
openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987422 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987445 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987465 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987477 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987485 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987500 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987524 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987530 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987535 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987543 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987555 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987620 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987645 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987669 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e 
nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987419648 +0000 UTC m=+470.680084376 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987689366 +0000 UTC m=+470.680353954 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987719 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987709946 +0000 UTC m=+470.680374534 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987727907 +0000 UTC m=+470.680392495 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987747 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987741927 +0000 UTC m=+470.680406515 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987760 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987755017 +0000 UTC m=+470.680419615 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987895 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987769398 +0000 UTC m=+470.680544489 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987917 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987910832 +0000 UTC m=+470.680575420 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: E0813 19:51:11.987935 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:43.987929572 +0000 UTC m=+470.680594160 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:51:11 crc kubenswrapper[4183]: I0813 19:51:11.990337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.009479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136
a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\
"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dba0ea54e565345301e3986d0dd8c643d32ea56c561c86bdb4d4b35fa49a453\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2024-06-27T13:21:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:13Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.027937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.052380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.071366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " 
pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087525 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087549 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087577 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087634 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.087993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088178 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088224 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088270 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.088342 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088503 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088526 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088550 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088603 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088587829 +0000 UTC m=+470.781252447 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088660 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088671 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088678 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088703 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088696492 +0000 UTC m=+470.781361110 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088743 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088760304 +0000 UTC m=+470.781424912 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088910 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088940 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088932999 +0000 UTC m=+470.781597607 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088972 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.088999 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.088990601 +0000 UTC m=+470.781655399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089045 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089067713 +0000 UTC m=+470.781732651 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089123 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089153 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089144575 +0000 UTC m=+470.781809373 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089212 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089249 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089238618 +0000 UTC m=+470.781903326 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089257 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089323 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089356 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089341771 +0000 UTC m=+470.782006499 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089388 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089406333 +0000 UTC m=+470.782070951 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089456 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089471 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089482 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089506105 +0000 UTC m=+470.782170774 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089546 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089557 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089566 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.089592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.089584738 +0000 UTC m=+470.782249366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.089639 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094238 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094322 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094349 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094424 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.094470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.094431806 +0000 UTC m=+470.787096604 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095057 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.095038014 +0000 UTC m=+470.787702712 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095237 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095257 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095269 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095319 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.095306281 +0000 UTC m=+470.787970969 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.095621 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096503 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096536 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096554 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096700 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096881 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.096739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.096719552 +0000 UTC m=+470.789384230 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.097979 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098004 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098050 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.098317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.097172025 +0000 UTC m=+470.789836673 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.098479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098952 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.098980 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099099 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.098925045 +0000 UTC m=+470.791589653 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.09912044 +0000 UTC m=+470.791785028 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.099143831 +0000 UTC m=+470.791808469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.099730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.099711007 +0000 UTC m=+470.792375715 (durationBeforeRetry 32s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.100349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.100468 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.101294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.10120308 +0000 UTC m=+470.793867898 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.101298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.105427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106400 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.106885 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106924 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106954 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.106967 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107016596 +0000 UTC m=+470.799681254 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.101617 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107087 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107079328 +0000 UTC m=+470.799743946 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107306 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107323 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107332 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107360976 +0000 UTC m=+470.800025594 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107470 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107502 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107516 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107546 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107566 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107578 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object 
"openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107556381 +0000 UTC m=+470.800221119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.107933 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.107920772 +0000 UTC m=+470.800585370 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109595 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 
19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109643 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.109632791 +0000 UTC m=+470.802297409 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.109746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109751 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109766 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109860 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 
19:51:12.109881 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109907 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.109895198 +0000 UTC m=+470.802559826 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.109945 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110001 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: 
\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110033 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110024812 +0000 UTC m=+470.802689430 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110110 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110119 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110188447 +0000 UTC m=+470.802853275 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.110232 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.110223158 +0000 UTC m=+470.802887826 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.110482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111062 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.111252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: 
\"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.111331929 +0000 UTC m=+470.803996717 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111509 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.11170629 +0000 UTC m=+470.804370998 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111887 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111922 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111936 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.111993 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.111979728 +0000 UTC m=+470.804644426 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.112028 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.112078 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.1120664 +0000 UTC m=+470.804731068 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.115026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115381 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115405 4183 
projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115426 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.115488 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.115469188 +0000 UTC m=+470.808134116 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.125390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.160298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.177964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.207374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.209917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.210968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.211039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.218271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.219298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219700 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219926 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220010 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220065 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220121 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220103548 +0000 UTC m=+470.912768326 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.219955 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220153 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220166 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220201 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220189081 +0000 UTC m=+470.912853879 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.220020 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.219741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:12 crc kubenswrapper[4183]: E0813 19:51:12.222958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:51:44.220358935 +0000 UTC m=+470.913023683 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.233442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.251069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.267173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.285887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.303890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.318966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.336464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.351733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.367608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.392973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.412932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.426069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.432979 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 
19:51:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.433088 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.443656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.462753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.490319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.503713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.522402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.545945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.561971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.580005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.603304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.624736 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.643679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.662195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.678466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.698843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.715483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.731612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.747235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.772334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.797919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.814680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.832541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 
2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.851410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-ho
st-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.865155 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f"} Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.875411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.895211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.912060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.938242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.954307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.971114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:12 crc kubenswrapper[4183]: I0813 19:51:12.989738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.004040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.023087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.045277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.057881 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.080718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.096717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.111548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.129164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.141299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.162088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.180579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.195850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.208867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.208942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209670 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209860 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210083 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210720 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.210848 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.210928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211114 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.211236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.211975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.212029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.212672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.212744 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:13 crc kubenswrapper[4183]: E0813 19:51:13.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.216768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.241924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.260389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.275694 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.302278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.337870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.377087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.417746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.431914 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.431982 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.463690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.501359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.539684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.578988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.616479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.657338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.700046 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.738332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.778629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.830527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.881299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.918914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.945133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:13 crc kubenswrapper[4183]: I0813 19:51:13.977665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.022864 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.057453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.106942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.139392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.176210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.209973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.223399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.258626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.302991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.314875 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315148 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315248 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315374 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.315496 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.335936 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.339166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2
482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.341630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.341916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.342413 4183 setters.go:574] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.360299 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1
e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\
\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f1
04e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365857 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365874 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365893 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.365920 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.386918 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391526 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391559 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.391601 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.409225 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413737 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413941 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.413976 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.414015 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:14Z","lastTransitionTime":"2025-08-13T19:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.421178 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.432215 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.432302 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.432905 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: E0813 19:51:14.432958 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.457277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.497870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.541958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.584955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.628287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.667723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.701660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.738179 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.813600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.834432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.860175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.901732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.946650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:14 crc kubenswrapper[4183]: I0813 19:51:14.977707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.018738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.056296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.096667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.140752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.188368 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.209744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.211897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.211966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.212947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.213080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.213485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.214893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.215760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.216579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.217336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.224061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.285058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:15 crc kubenswrapper[4183]: E0813 19:51:15.389688 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.432067 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:15 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:15 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.432156 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.510596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after
2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.669903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726
a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.828137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.871104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:15 crc kubenswrapper[4183]: I0813 19:51:15.984308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.033579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.064140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209587 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.211569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:16 crc kubenswrapper[4183]: E0813 19:51:16.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.273030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.338308 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.363562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.385404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.404099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.421513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.432631 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.432723 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.440407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.474394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.494576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.515876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.534547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.555438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.573903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.617191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.637756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.655413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.671890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.703082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.727226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.745717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.760621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.775566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.810502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.833505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.850015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.867168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.891755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.904583 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453"} Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.908658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.924159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.939621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.956049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.971838 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:16 crc kubenswrapper[4183]: I0813 19:51:16.991393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.005948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.021381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.040133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.056056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.072695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.099019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.139018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.176695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.208992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.210740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211460 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.211983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.212912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.212957 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.213292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:17 crc kubenswrapper[4183]: E0813 19:51:17.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.220088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b065562fefc63a381832e1073dc188f7f27d20b65780f1c54a9aa34c767a3b80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:28:38Z\\\",\\\"message\\\":\\\"Thu Jun 27 13:21:15 UTC 2024\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:21:14Z\\\"}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.258336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.300519 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.339115 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.380173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.421761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.431862 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.431965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.456697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.498064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.548357 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.577003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.622515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.660262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.698527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.739460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.776891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.817708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.859649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.896351 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.940258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:17 crc kubenswrapper[4183]: I0813 19:51:17.977728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.020480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.058168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.103326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.143005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.176901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.208989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:18 crc kubenswrapper[4183]: E0813 19:51:18.209381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.220355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.260275 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.298617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.339464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.381536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.418447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.432713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.432906 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.460858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.498501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.542007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.580348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.617100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.657993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.698140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.738018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.786119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.818503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.856961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.899682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.941592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:18 crc kubenswrapper[4183]: I0813 19:51:18.975461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.019726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.060018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.098167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.138641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.177624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208264 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.208497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.208698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.208994 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209294 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210589 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209493 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.210954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.210979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209576 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.209671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.209764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.211883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.211955 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.212704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.212914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.213072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.214975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.215357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:19 crc kubenswrapper[4183]: E0813 19:51:19.215365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.220908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.259021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.300585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.338432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.380912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.419832 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.431902 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.432320 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.458858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.500419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.537872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.577374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.623367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.766701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.790947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.820256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.850595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.874545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.892241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.909876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.938441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:19 crc kubenswrapper[4183]: I0813 19:51:19.977522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.017315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.057197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.098237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.139069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.184590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.208428 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.208644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.209866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.209964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.218435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.259151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.296561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.340897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.379992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: E0813 19:51:20.392194 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.423043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f1
37a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:5
0:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.432402 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.432498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.458549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.500177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.541717 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.579589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.623195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.658644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.703630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.739581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.777685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.817014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.864239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.900465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.938106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:20 crc kubenswrapper[4183]: I0813 19:51:20.979222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.024705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208252 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208533 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.208965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209264 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209559 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209763 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.211420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.211916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.212407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.212704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.212866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.213878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.215254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.215363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.218528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.218959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:21 crc kubenswrapper[4183]: E0813 19:51:21.219540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.432517 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:21 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:21 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:21 crc kubenswrapper[4183]: I0813 19:51:21.433903 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.209970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:22 crc kubenswrapper[4183]: E0813 19:51:22.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.432099 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:22 crc kubenswrapper[4183]: I0813 19:51:22.432193 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.209986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.210852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211598 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.211922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.212247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.212980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:23 crc kubenswrapper[4183]: E0813 19:51:23.213534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.431706 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:23 crc kubenswrapper[4183]: I0813 19:51:23.431872 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208756 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.208988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.210663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.432906 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.433026 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639317 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639385 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639401 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639421 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.639447 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.653677 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.658767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659184 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.659402 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.674016 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679322 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679390 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679493 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679525 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.679655 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.696555 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701824 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701844 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.701862 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.702191 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.716616 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721700 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721765 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721853 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:24 crc kubenswrapper[4183]: I0813 19:51:24.721878 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:24Z","lastTransitionTime":"2025-08-13T19:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.738284 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:24 crc kubenswrapper[4183]: E0813 19:51:24.738362 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.209911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210188 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210433 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210601 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211021 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211371 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.211962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.212906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.214344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.231529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.249615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.265732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.279593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.295707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.313038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.328375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.345296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.367307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.383495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: E0813 19:51:25.393295 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.400683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.416499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.432475 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.432588 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.433335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.457061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.474546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.490258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.509655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.527202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.545919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.565131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.580255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.613675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.629380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.649561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.666564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.685427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.707308 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.724742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.746955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.766518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.785331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.804706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.828198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.844508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.862140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.880048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.895446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.920745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.948183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:25 crc kubenswrapper[4183]: I0813 19:51:25.982439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.005452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.022371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.040235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.061654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.084113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.099721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.116106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.131947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.146928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.170229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.183011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.196946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208225 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.208720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.208890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:26 crc kubenswrapper[4183]: E0813 19:51:26.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.215950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad73
3f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.232440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.249114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.272082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.288483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.305123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.320452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.337512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.352419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.368181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.386370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.402988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.421871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.432242 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.432343 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.440943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:26 crc kubenswrapper[4183]: I0813 19:51:26.456318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208602 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208753 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.208920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.208680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209062 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209152 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209444 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210742 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.212958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:27 crc kubenswrapper[4183]: E0813 19:51:27.214492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.433882 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:27 crc kubenswrapper[4183]: I0813 19:51:27.434002 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.208712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.208877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:28 crc kubenswrapper[4183]: E0813 19:51:28.209893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.433038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:28 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:28 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:28 crc kubenswrapper[4183]: I0813 19:51:28.433173 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.210993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.211886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.212959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.213383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.213929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:29 crc kubenswrapper[4183]: E0813 19:51:29.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.433432 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:29 crc kubenswrapper[4183]: I0813 19:51:29.433544 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.208156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.208441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.208659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.208879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.210018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.210073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:30 crc kubenswrapper[4183]: E0813 19:51:30.395481 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.433977 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:30 crc kubenswrapper[4183]: I0813 19:51:30.434108 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209890 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.212638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.212946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.213727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.213927 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214510 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.214622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.214985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215259 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.215874 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.215993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.216210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.216295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.216721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.217587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217835 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.217842 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.218947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.219121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.219844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.220748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.221190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.222933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:31 crc kubenswrapper[4183]: E0813 19:51:31.223293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.433089 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:31 crc kubenswrapper[4183]: I0813 19:51:31.433191 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.208576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.209926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:32 crc kubenswrapper[4183]: E0813 19:51:32.210071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.432598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:32 crc kubenswrapper[4183]: I0813 19:51:32.432690 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.210970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.211951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.212933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.213647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.213666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.214991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.215904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:33 crc kubenswrapper[4183]: E0813 19:51:33.216359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.433117 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:33 crc kubenswrapper[4183]: I0813 19:51:33.433221 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208543 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.433364 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.433469 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.869755 4183 kubelet_node_status.go:729] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870279 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870328 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870375 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.870426 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.893462 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899691 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899726 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899738 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899756 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.899874 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919409 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919505 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919530 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.919560 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941412 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941546 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941570 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941596 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.941625 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.956460 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962156 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962179 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962222 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:34 crc kubenswrapper[4183]: I0813 19:51:34.962253 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:34Z","lastTransitionTime":"2025-08-13T19:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.977525 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:34Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:34 crc kubenswrapper[4183]: E0813 19:51:34.977593 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208972 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.208954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209497 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.209906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.209326 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.211929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.212872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.213083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.214902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.214981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.215410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.215871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.216549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.216639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.217966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.218360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.218536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.218707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.219994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.220747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.233218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.248274 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.262142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.282086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.300956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.326733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.344253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.361191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.376080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.390425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: E0813 19:51:35.398056 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.413960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.430656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.432073 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.432149 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.448265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.464224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.484653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.509143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.523885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.539186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.553393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.574891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.598559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.615859 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.633722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.653005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.669221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-1
3T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.684397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.700524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.716653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.735922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.752281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.772240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.795576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.811142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.826870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.846876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.864673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.880239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.895552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.910379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.926482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.941238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.955151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:35 crc kubenswrapper[4183]: I0813 19:51:35.971067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.000906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.017406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.035327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.050655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.065579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.101752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.172718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.187702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.208749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208881 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:36 crc kubenswrapper[4183]: E0813 19:51:36.210357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.212151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.227295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.242123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.256967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.267353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.282965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.298417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.317515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.334164 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.351543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.368673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.385298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.399928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.417266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.431895 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.432192 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.432671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:36 crc kubenswrapper[4183]: I0813 19:51:36.448930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209379 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209718 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210022 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210326 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.210697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.210973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.211901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.211949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.212132 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.213214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.213855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.214038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:37 crc kubenswrapper[4183]: E0813 19:51:37.214174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.437644 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:37 crc kubenswrapper[4183]: I0813 19:51:37.437841 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.208923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:38 crc kubenswrapper[4183]: E0813 19:51:38.209657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.431243 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:38 crc kubenswrapper[4183]: I0813 19:51:38.431333 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.208950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.209968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.210895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.210974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.211911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.212686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.213252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.213714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.213948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.214298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.214402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:39 crc kubenswrapper[4183]: E0813 19:51:39.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.433011 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:39 crc kubenswrapper[4183]: I0813 19:51:39.433108 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209016 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.209917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.267242 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Aug 13 19:51:40 crc kubenswrapper[4183]: E0813 19:51:40.400725 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.432900 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:40 crc kubenswrapper[4183]: I0813 19:51:40.433040 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.209917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.209926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.210632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.210970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211399 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.211956 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.211981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.212649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.212998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.213735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.213769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.214772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.214986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215699 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.215921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.215970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.216040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.216931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:41 crc kubenswrapper[4183]: E0813 19:51:41.217929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.436074 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:41 crc kubenswrapper[4183]: I0813 19:51:41.436377 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208937 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:42 crc kubenswrapper[4183]: E0813 19:51:42.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.433429 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:42 crc kubenswrapper[4183]: I0813 19:51:42.433547 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208725 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.208903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209058 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.209724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211209 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211655 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212133 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.212749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.212926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.213129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.213357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.215137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.432240 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:43 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:43 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.432343 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798372 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.798525 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.798622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.798600749 +0000 UTC m=+534.491265497 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.798951 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799012 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799126 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799180 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799249 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.799388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799727 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799509 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799549 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799567 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799602 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799598 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799597 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799630 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799653 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799659 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799674 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800140 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800155 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.799905 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.799887525 +0000 UTC m=+534.492552243 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800189444 +0000 UTC m=+534.492854132 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800248 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800213594 +0000 UTC m=+534.492878263 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800258746 +0000 UTC m=+534.492923464 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800277976 +0000 UTC m=+534.492942674 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800304 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800295277 +0000 UTC m=+534.492959965 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800319788 +0000 UTC m=+534.492984486 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800339328 +0000 UTC m=+534.493004036 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800365 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800357329 +0000 UTC m=+534.493021997 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.800374099 +0000 UTC m=+534.493038737 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.800400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.80039226 +0000 UTC m=+534.493056958 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800534 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800601 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800669 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800733 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800768 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800910 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.800947 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801147 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801169 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801195152 +0000 UTC m=+534.493859860 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801243 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801288 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801277095 +0000 UTC m=+534.493941823 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801318 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801356 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801345647 +0000 UTC m=+534.494010345 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801388 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801457 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801479681 +0000 UTC m=+534.494144389 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801556 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801571 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801593 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801616 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801603534 +0000 UTC m=+534.494268252 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801647 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801687 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801689 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801676936 +0000 UTC m=+534.494341734 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801729 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801736 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801725298 +0000 UTC m=+534.494389956 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801766 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801755578 +0000 UTC m=+534.494420346 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801876 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801860561 +0000 UTC m=+534.494525259 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801900 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801931 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801945 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801933534 +0000 UTC m=+534.494598222 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.801972 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.801961254 +0000 UTC m=+534.494626082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.801649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.802018 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.802054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802387 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802427 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.802416117 +0000 UTC m=+534.495080945 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802475 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.802512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.80250155 +0000 UTC m=+534.495166278 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.904579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.904750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.904892 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905220 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.905150844 +0000 UTC m=+534.597815532 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905557 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905608 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:43 crc kubenswrapper[4183]: 
E0813 19:51:43.905890 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.905925 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905944 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905932 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.905921426 +0000 UTC m=+534.598586074 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.905998 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906067 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.906063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906102 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906090431 +0000 UTC m=+534.598755109 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: I0813 19:51:43.906137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906144 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906135682 +0000 UTC m=+534.598800340 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906152 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906161 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:52:47.906155433 +0000 UTC m=+534.598820021 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906195 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906182614 +0000 UTC m=+534.598847312 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906203 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906246 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906234035 +0000 UTC m=+534.598898723 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906478 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906507363 +0000 UTC m=+534.599171991 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906767 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:43 crc kubenswrapper[4183]: E0813 19:51:43.906996 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:47.906983467 +0000 UTC m=+534.599648105 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.008591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009334 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc 
kubenswrapper[4183]: E0813 19:51:44.008717 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009528 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009230 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009530 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009505138 +0000 UTC m=+534.702169846 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009285 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.00958855 +0000 UTC m=+534.702253238 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009626 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009615681 +0000 UTC m=+534.702280369 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.009651 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.009641841 +0000 UTC m=+534.702306479 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009846 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009884 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009912 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.009983 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010023 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010084 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010146 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010155 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010174 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010179 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010192 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010193 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010203 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010184647 +0000 UTC m=+534.702849355 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010235 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010225678 +0000 UTC m=+534.702890296 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010102 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010238 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010251 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010244589 +0000 UTC m=+534.702909177 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010117 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010295 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010307 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010278 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010265479 +0000 UTC m=+534.702930147 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010370 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010415 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010431 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010447 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010439954 +0000 UTC m=+534.703104652 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010505 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010511 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010493286 +0000 UTC m=+534.703157974 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010523 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010534 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010542 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010530167 +0000 UTC m=+534.703194835 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010607 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010611 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010629 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01065774 +0000 UTC m=+534.703322438 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010699432 +0000 UTC m=+534.703364190 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010724 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010723102 +0000 UTC m=+534.703387870 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010758 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010749593 +0000 UTC m=+534.703414201 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.010764 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010713 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010894 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.010876997 +0000 UTC m=+534.703541765 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.010876 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011056 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011041971 +0000 UTC m=+534.703706649 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011212 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011195706 +0000 UTC m=+534.703860394 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011302 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011375 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01136069 +0000 UTC m=+534.704025358 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011400 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011509 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011527 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011561 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011561056 +0000 UTC m=+534.704225784 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011713 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.011764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.011751922 +0000 UTC m=+534.704416630 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011935 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.011980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012021 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012067 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012068 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012073 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01206059 +0000 UTC m=+534.704725298 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012117 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012107332 +0000 UTC m=+534.704771970 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012135 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012164 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012175 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012179 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012166653 +0000 UTC m=+534.704831381 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012203 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012196654 +0000 UTC m=+534.704861342 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012241 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012257676 +0000 UTC m=+534.704922294 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012304 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012328 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012322468 +0000 UTC m=+534.704987156 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012361 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012410 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012412 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012435 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012428921 +0000 UTC m=+534.705093539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012473 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012506893 +0000 UTC m=+534.705171601 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012532 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012546 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012557 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012587 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012577695 +0000 UTC m=+534.705242303 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012627 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012614906 +0000 UTC m=+534.705279614 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012635 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012663 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012654407 +0000 UTC m=+534.705319185 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012696 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012888 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012900 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.012939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod 
\"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012942 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.012933865 +0000 UTC m=+534.705598563 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.012982 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013015 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013007107 +0000 UTC m=+534.705671805 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013049 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013083829 +0000 UTC m=+534.705748517 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013127 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013174 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013186 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013197 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013190313 +0000 UTC m=+534.705854921 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013251 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013262 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013270 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013291375 +0000 UTC m=+534.705955993 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013366 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013387 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013400 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013436 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013425259 +0000 UTC m=+534.706089977 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013467 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013529 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013562 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013555103 +0000 UTC m=+534.706219691 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013565 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013569893 +0000 UTC m=+534.706234491 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013606 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013625 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013637 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013649 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013660 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 
19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013668896 +0000 UTC m=+534.706333514 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013737 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013756 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013744408 +0000 UTC m=+534.706409076 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013764 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013898 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.013913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.013993 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014005 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.013928984 +0000 UTC m=+534.706593672 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014050 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014092 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014081038 +0000 UTC m=+534.706745716 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014140 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014010 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014174 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014263 4183 projected.go:294] Couldn't get configMap 
openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014280 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014305404 +0000 UTC m=+534.706970122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014140 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014366 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014356976 +0000 UTC m=+534.707021654 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014367 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014478 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014497 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014614 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014633 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014645 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014557 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014725 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014739 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014621833 +0000 UTC m=+534.707286571 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014764 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014753227 +0000 UTC m=+534.707417895 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014887 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.0148704 +0000 UTC m=+534.707535088 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.014938 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.014924932 +0000 UTC m=+534.707589640 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.014999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015028 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015077086 +0000 UTC m=+534.707741804 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015126 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015156739 +0000 UTC m=+534.707821487 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015220 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015237 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015249 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015259 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015286 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015274592 +0000 UTC m=+534.707939330 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015334 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015351 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015363 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015391265 +0000 UTC m=+534.708055943 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015403 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015433346 +0000 UTC m=+534.708098064 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015447 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015469 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015458777 +0000 UTC m=+534.708123475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015489 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.015480428 +0000 UTC m=+534.708145116 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015363 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015512 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015581 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015662 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.015724 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.015989 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016035 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016025603 +0000 UTC m=+534.708690311 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016066 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016088 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016101 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016128956 +0000 UTC m=+534.708793674 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016141 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016168 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016179 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016185 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016214 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016203238 +0000 UTC m=+534.708867946 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016089 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016239 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016227249 +0000 UTC m=+534.708891987 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016240 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016262 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.01625286 +0000 UTC m=+534.708917548 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016263 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016321 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016339 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016354 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016324232 +0000 UTC m=+534.708988950 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016397694 +0000 UTC m=+534.709062362 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016411 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.016472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.016458556 +0000 UTC m=+534.709123254 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.123741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.123987 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124249 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124305 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.124287828 +0000 UTC m=+534.816952456 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124379 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12435796 +0000 UTC m=+534.817022668 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.124761 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125120 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124845 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125623 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125643 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.124922 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125225 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125866 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125909 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125939 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125959 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125968 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125509 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.125692 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.125677157 +0000 UTC m=+534.818341855 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126033 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126022627 +0000 UTC m=+534.818687225 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126051 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126044608 +0000 UTC m=+534.818709196 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126067 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126061218 +0000 UTC m=+534.818725806 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126075259 +0000 UTC m=+534.818739857 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.125583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126381 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126524 4183 configmap.go:199] Couldn't get 
configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126560763 +0000 UTC m=+534.819225471 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126576 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126622 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126609974 +0000 UTC m=+534.819274682 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126622 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.126672 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.126655685 +0000 UTC m=+534.819320383 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126708 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126760 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: 
\"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126947 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.126980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127057 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127096 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object 
"openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127150 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127135009 +0000 UTC m=+534.819799777 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127202 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127218 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127236 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: 
\"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127272 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127259452 +0000 UTC m=+534.819924070 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127308 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 
13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127363 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127375 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127382 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127408 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127399346 +0000 UTC m=+534.820063964 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127445 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127471 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127465278 +0000 UTC m=+534.820129976 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127523 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127555 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:44 crc 
kubenswrapper[4183]: E0813 19:51:44.127565 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127553071 +0000 UTC m=+534.820217759 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127595 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.127625 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.127618283 +0000 UTC m=+534.820282971 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.127869 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128162 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128184 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128194 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object 
"openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128203 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128163 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128221 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128235 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128237 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128245 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128407 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 
19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128425 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128433 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128489 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128506 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128516 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.128739 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128841 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128826497 +0000 UTC m=+534.821491235 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128865 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128858638 +0000 UTC m=+534.821523226 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128881 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128873958 +0000 UTC m=+534.821538556 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128895 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128888779 +0000 UTC m=+534.821553447 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128897 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128911 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.128904169 +0000 UTC m=+534.821568757 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128914 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128924 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12892109 +0000 UTC m=+534.821585678 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.128986 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12893712 +0000 UTC m=+534.821601708 (durationBeforeRetry 1m4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129043 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129070 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129127 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129169 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129202 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129221 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129206928 +0000 UTC m=+534.821871676 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129245 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129234969 +0000 UTC m=+534.821899657 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129251 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129259 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129256089 +0000 UTC m=+534.821920757 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129175 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129285 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.12927832 +0000 UTC m=+534.821943018 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129313 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129322 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129311801 +0000 UTC m=+534.821976529 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129361 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129374 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129377 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129383 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129413 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs 
podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.129406084 +0000 UTC m=+534.822070782 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129549 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129591 
4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.129706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129915 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129933 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129942 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.129970 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130003 4183 projected.go:294] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130015 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130024 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130042 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130050 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130042282 +0000 UTC m=+534.822706890 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130076 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130066492 +0000 UTC m=+534.822731190 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130086993 +0000 UTC m=+534.822751591 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130110 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130103334 +0000 UTC m=+534.822768012 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130113 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130116 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130140 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130154 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod 
openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.130170 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130192 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130181116 +0000 UTC m=+534.822845794 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130208 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.130228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130231 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130225077 +0000 UTC m=+534.822889695 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130125 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130256 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130275148 +0000 UTC m=+534.822939756 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130328 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.130402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.130386452 +0000 UTC m=+534.823051220 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208574 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.208705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.208912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.209633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.209735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.210142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.231490 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.231622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231671 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231707 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231725 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod 
openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231888 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.231862193 +0000 UTC m=+534.924527101 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231918 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231941 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.231985 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.231970316 +0000 UTC m=+534.924635074 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.232307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232506 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232529 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232537 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: E0813 19:51:44.232569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:52:48.232559503 +0000 UTC m=+534.925224131 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.432911 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:44 crc kubenswrapper[4183]: I0813 19:51:44.433049 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.208944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209108 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209347 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209466 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.209997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210375 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.210730 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.211889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212640 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.212754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.212930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.213018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.213579 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.213738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.213883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.214958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.216714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.231951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243685 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.243734 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.250376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.260567 4183 
kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270333 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270440 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270462 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270491 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.270527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.274134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.288459 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295272 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295332 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295396 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295420 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.295448 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.298981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.311313 4183 
kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.314382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315935 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.315990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.316017 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.316042 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.331983 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.334511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337573 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337757 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.337969 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.338098 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.338277 4183 setters.go:574] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:45Z","lastTransitionTime":"2025-08-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.352708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.355406 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.355463 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.373092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.391013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: E0813 19:51:45.401894 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.409704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.427029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.432224 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.432541 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.444272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.460142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.484393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.502688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.523451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.541857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.559654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.573174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.592130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.610392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.627480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.648546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.669644 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.692235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.711597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.728160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.749468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.768486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.787670 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.806698 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.823186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.840522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.857940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.876660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.897585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.920332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.939978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.960026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.976559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:45 crc kubenswrapper[4183]: I0813 19:51:45.993377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.017355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.041465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.064493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.084460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.105455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.131559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726
a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.145699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.161960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.182054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.200722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.208494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.208674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:46 crc kubenswrapper[4183]: E0813 19:51:46.209516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.216250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.229922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.248552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.269145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.284604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.301477 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.326096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.341728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.362654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.382502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.399989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.418344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.433903 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.434038 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.435722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.450695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.470748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:46 crc kubenswrapper[4183]: I0813 19:51:46.494002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209360 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209769 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.209069 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.210953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.210956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211045 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211663 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.211722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.211974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212245 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.212976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.212997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.213759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.213875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:47 crc kubenswrapper[4183]: E0813 19:51:47.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.438614 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:47 crc kubenswrapper[4183]: I0813 19:51:47.438950 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.209259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209933 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.209919 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:48 crc kubenswrapper[4183]: E0813 19:51:48.211104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.432377 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:48 crc kubenswrapper[4183]: I0813 19:51:48.432483 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.052715 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.054254 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" exitCode=1 Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.054482 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2"} Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.055617 4183 scope.go:117] "RemoveContainer" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.080896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.111828 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.130881 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.153137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.171905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.188438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209733 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210532 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210717 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.210963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213038 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.212995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213871 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.213969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.213526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.214631 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.214987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.215586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:49 crc kubenswrapper[4183]: E0813 19:51:49.216621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.236337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.252160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.274387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.293618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.309933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.327415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.339067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.356718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.382858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.414933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.435613 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.435738 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 
19:51:49.443502 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.462735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.491399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.512191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.540731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.562040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.578684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.602039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.619953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.647290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.670030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.696913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.724296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.764759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.804118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.838000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.869325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.894078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.934757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:49 crc kubenswrapper[4183]: I0813 19:51:49.985716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.010727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.033229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.058686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.066295 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.066493 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2"} Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.108483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.136625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.162404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.201241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.210134 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.210430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.212399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.212613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.214679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.215210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.215499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.215673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.252552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.316722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.382307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: E0813 19:51:50.404173 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.453266 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.453401 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.565462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.609289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.677552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.774159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.831854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.883166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.908097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.926107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.948769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:50 crc kubenswrapper[4183]: I0813 19:51:50.969102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.001579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.020651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.048407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.079964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.111499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.137208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.158627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.190165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.208755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210839 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.211272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.211976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.212970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:51:51 crc kubenswrapper[4183]: E0813 19:51:51.213133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.227696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.250311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.278588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e
9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.308426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.334742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.366728 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.389535 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.417989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.432934 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.433063 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.445747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.474443 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.503261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.524507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.551055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.581610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.607166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.634153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.655177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.672956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.691144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.708276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.725610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.744946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.764432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.783451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.805557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.832052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.851701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.877731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.898050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.920959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.941588 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.960968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.979684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:51 crc kubenswrapper[4183]: I0813 19:51:51.998927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.015163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.032298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.060751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.076447 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.080920 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" exitCode=1 Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.081153 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561"} Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.083173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.084030 4183 scope.go:117] "RemoveContainer" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.102342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.121961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.148307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.169374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.193019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.208629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.209289 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.210536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.210743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.209975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.210266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.211197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:52 crc kubenswrapper[4183]: E0813 19:51:52.211552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.221683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.240657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.258307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.276889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.307707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.336555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.363410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.390102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.415228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.440188 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.440447 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.446705 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.470253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.493737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.522771 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.553383 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.576758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.604001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.630113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.654000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.677756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.702363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.722691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.744739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.781291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.803867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.822987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.841762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.864875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.894209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.918285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.941567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.962727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:52 crc kubenswrapper[4183]: I0813 19:51:52.989228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.019200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.088903 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.093708 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7"}
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.208948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.209971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209057 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209138 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209285 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209385 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209481 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.212988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.213519 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.214662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.216146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:53 crc kubenswrapper[4183]: E0813 19:51:53.216296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.419603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.433087 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.433565 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.441102 4183 status_manager.go:877] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.466958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.484898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.507150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.531946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.557615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.574454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.595513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.620263 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.643765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.661541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.679196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.699891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.719095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.736644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.753246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.779415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc 
kubenswrapper[4183]: I0813 19:51:53.798894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.813907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.829676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.848644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.867138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.884452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.901600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.917929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.934615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.957559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.975018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:53 crc kubenswrapper[4183]: I0813 19:51:53.988551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.003492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.019915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.045142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 
2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.073909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.109722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.157049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.191993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.208069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.208272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.208464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.208598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.210912 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.211619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:54 crc kubenswrapper[4183]: E0813 19:51:54.211849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.231007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc 
kubenswrapper[4183]: I0813 19:51:54.269858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.316335 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.352863 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.395509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431754 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:54 crc kubenswrapper[4183]: 
healthz check failed Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.431947 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.469055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.513971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.554378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.591248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.628882 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670279 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670373 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670410 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670443 4183 kubelet_getters.go:187] 
"Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.670463 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.677002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.708991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.749556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.788439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.827708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.869341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:54 crc kubenswrapper[4183]: I0813 19:51:54.909182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.051857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.073857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.091768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.103548 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.104318 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/0.log" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.110637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111265 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" exitCode=1 Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111326 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" 
event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.111388 4183 scope.go:117] "RemoveContainer" containerID="07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.113452 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.114359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.128564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.150693 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.190205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.208910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208964 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.208984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209204 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209440 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209557 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.209930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210526 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.210770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.211768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212369 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.212570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.214617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.213025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.229070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.269118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.309346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.349660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.389738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.405084 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.428221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.431603 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.431712 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.470619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.509111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.549149 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.603315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.648403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672427 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672497 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672517 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.672538 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.676602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.689090 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694458 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694476 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.694525 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.710534 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.711687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715274 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715363 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc 
kubenswrapper[4183]: I0813 19:51:55.715384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.715407 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.729740 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734225 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.734267 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.748461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.749506 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754376 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754396 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.754428 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:51:55Z","lastTransitionTime":"2025-08-13T19:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.770551 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: E0813 19:51:55.770612 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.793354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.830858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.870129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.911955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.949662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:55 crc kubenswrapper[4183]: I0813 19:51:55.990308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.028434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.070402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.116354 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.118370 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.151098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.189539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208767 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.208847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.209985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.210166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:51:56 crc kubenswrapper[4183]: E0813 19:51:56.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.227890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.269856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.312263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.352152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.392237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.430765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.432892 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 
19:51:56.432974 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.470358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.510332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.548723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.589165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.636142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc 
kubenswrapper[4183]: I0813 19:51:56.675128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.710190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.750476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.787998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.833890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.870929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.910554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.950076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:56 crc kubenswrapper[4183]: I0813 19:51:56.989745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.039128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.067853 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.108434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.148481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.191345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210217 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.210959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.210964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211569 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.211729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212402 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212891 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.212902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.212990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213076 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.213610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.213991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214618 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.214680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.214958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.215967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.216348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:57 crc kubenswrapper[4183]: E0813 19:51:57.216505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.234166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.269748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.309703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.351314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.393367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.430134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.433363 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:51:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:51:57 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:51:57 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.433466 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.473916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.512328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.552934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.591686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.631762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.671139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.715296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.748927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.791380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.828504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9
c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.867666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.910258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.952209 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:57 crc kubenswrapper[4183]: I0813 19:51:57.990313 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.029597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.071537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.110306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.151115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.189829 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.208610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.208723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:51:58 crc kubenswrapper[4183]: E0813 19:51:58.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.232644 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.233686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.269432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.309528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.348194 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.391381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.432032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.435206 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.435332 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.470307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.510678 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.546335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.589101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.629663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.672130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.711319 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.750960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.795613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.828649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.875345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.915763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.952986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:58 crc kubenswrapper[4183]: I0813 19:51:58.991605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.028182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.068754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.108430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.151392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.190051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208671 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208837 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208856 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.208956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.208978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209558 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.209980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.209982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.210968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211128 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.211693 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.211871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.212768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.214475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:51:59 crc kubenswrapper[4183]: E0813 19:51:59.215134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.234294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc 
kubenswrapper[4183]: I0813 19:51:59.274943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.311017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.348750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.389389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.432355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.434100 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:51:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:51:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:51:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.434174 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.470411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.514230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.548702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.592005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.637684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.671538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.709341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.747923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.792326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.830866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.869021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.911656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.949686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:51:59 crc kubenswrapper[4183]: I0813 19:51:59.989656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:51:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.029718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.069555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.113996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.161651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.193280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.208720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209078 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.232660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:00 crc kubenswrapper[4183]: E0813 19:52:00.407592 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.437431 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:00 crc kubenswrapper[4183]: I0813 19:52:00.437677 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.169739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.186957 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.203493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209441 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.209887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210111 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210647 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210843 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210890 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211113 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211459 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.211914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212191 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.212682 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.212997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.213297 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.213331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.214934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:01 crc kubenswrapper[4183]: E0813 19:52:01.215100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.222402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.238596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.254125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:01Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.433438 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:01 crc kubenswrapper[4183]: I0813 19:52:01.433573 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209293 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.210612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:02 crc kubenswrapper[4183]: E0813 19:52:02.210836 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.433352 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:02 crc kubenswrapper[4183]: I0813 19:52:02.433474 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209181 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.209972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.209980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210699 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.210904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211143 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211466 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212888 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.212930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213364 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.213652 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.213925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.214763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.214966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.215297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.215960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.216128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:03 crc kubenswrapper[4183]: E0813 19:52:03.216257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.433926 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:03 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:03 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:03 crc kubenswrapper[4183]: I0813 19:52:03.434116 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.208911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.209963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.210132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:04 crc kubenswrapper[4183]: E0813 19:52:04.210201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.433388 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:04 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:04 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:04 crc kubenswrapper[4183]: I0813 19:52:04.433530 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.210886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.210978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.211524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.212726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.212948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.213354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.214957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.215907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.216548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.241622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa
1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.269937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.291147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sc
heduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b64
57550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.310586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.326667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.346252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.363210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.381545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.400245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.411345 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.417429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.433748 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.433937 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.434654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.455425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.472571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.494716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.511286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.536543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.556634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.573265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.588031 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.605925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.622428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.639133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.654938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.676539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.701538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.719491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.738991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.757872 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.773712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.790734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.812314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.828025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.841756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.857351 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.883484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07752a5beb70c8c101afc3171b1a8e4c4e2212fc9939840b594a2736d0ab4561\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:51Z\\\",\\\"message\\\":\\\"etworkPolicy event handler 4 for removal\\\\nI0813 19:51:51.514559 14994 handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:51:51.514564 14994 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:51:51.514573 14994 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:51.514581 14994 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:51:51.514588 14994 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:51.514589 14994 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:51.514598 14994 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:51.514645 14994 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:51:51.514663 14994 handler.go:217] Removed *v1.NetworkPolicy event handler 4\\\\nI0813 19:51:51.514672 14994 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:51.514741 14994 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:51.514881 14994 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:51:51.514901 14994 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc 
kubenswrapper[4183]: I0813 19:52:05.935102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.944437 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.944942 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.945384 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:05Z","lastTransitionTime":"2025-08-13T19:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.959048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: E0813 19:52:05.977053 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.983836 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984156 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984287 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984379 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.984545 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:05Z","lastTransitionTime":"2025-08-13T19:52:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:05 crc kubenswrapper[4183]: I0813 19:52:05.987475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.009425 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015105 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015231 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015267 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.015291 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.020379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.028933 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033686 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033732 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033751 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.033858 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.038611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.049417 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.052929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054542 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054592 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.054617 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:06Z","lastTransitionTime":"2025-08-13T19:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.068530 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.070432 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.070487 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.085959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.099899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.119378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.148905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.169759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.185748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.200450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.208870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.208937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:06 crc kubenswrapper[4183]: E0813 19:52:06.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.216865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont
/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.237627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.253441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.269458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.289357 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.305318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.319150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.343453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.362951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.382658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.401025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.416378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.431650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.433563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.433682 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.449299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.464728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.481490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.496761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:06 crc kubenswrapper[4183]: I0813 19:52:06.511219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209748 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210281 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210576 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.210844 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211226 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211843 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.211661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.209197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.212876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.213675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215266 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.215704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.215989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216382 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.216959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.216436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.217088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.217343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:07 crc kubenswrapper[4183]: E0813 19:52:07.217756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.432160 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:07 crc kubenswrapper[4183]: I0813 19:52:07.432324 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.209894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.209947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.210055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:08 crc kubenswrapper[4183]: E0813 19:52:08.210297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.432478 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:08 crc kubenswrapper[4183]: I0813 19:52:08.432589 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.208941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.210980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.211159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.209758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.212988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:09 crc kubenswrapper[4183]: E0813 19:52:09.213361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.214453 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.235903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.253683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.268602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.283004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.305636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.320032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.348304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.365573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.386142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.404931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.425928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.433930 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.434051 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.450073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.466041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.484876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.509271 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.533459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.551080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.569356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.585374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.610325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.635148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.655616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.677546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.693348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.717671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.733954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.759086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.792389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 
handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.812763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.837635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.855295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.870753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.892653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.909739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.925691 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.941728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:09 crc kubenswrapper[4183]: I0813 19:52:09.955310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 
2024-12-26T00:46:02Z"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.193066 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.198695 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"}
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.199424 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.209051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.209532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.384450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:09Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:10 crc kubenswrapper[4183]: E0813 19:52:10.413605 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.415973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.434354 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.434495 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.435322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 
19:52:10.463767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba89
4f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.490287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.513656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.531393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.559318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.576538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.595912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.612337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.630337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.650673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.671237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.691461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.711148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.729313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.745359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.762311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.780161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.800473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.815505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.838247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.855675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.873421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.890107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.910909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.930653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.947686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.964867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.980401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:10 crc kubenswrapper[4183]: I0813 19:52:10.997023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.012398 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.038504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.058439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.074053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.092033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.110944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.130460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.146314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.164420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.181987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.199167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.206704 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.207918 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/1.log" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208710 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.208759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.208834 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.208981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209392 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.209854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.209906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210133 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.210891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.211971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.212658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.213369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220688 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" exitCode=1 Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220724 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa"} Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.220755 4183 scope.go:117] "RemoveContainer" containerID="55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.222944 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:11 crc kubenswrapper[4183]: E0813 19:52:11.223746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.224423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.239353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.260542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.285055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.301059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.317102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.333940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.349865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.367731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.383535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.399153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.413553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.427553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.432442 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.432594 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.444611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.459377 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.476690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.490626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.505916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.529938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.547010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.564254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.579887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.594465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.608076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.621930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.634400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.645167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.659323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.672023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.687119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.702248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.719733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.742523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.785341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.826745 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.865540 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.904915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.944458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:11 crc kubenswrapper[4183]: I0813 19:52:11.987002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.023663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.062628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.104674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.162176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55ec019d83cfecee513d99ac18e2ee82ef341831cf1ccbf84cdcde598bfcb6b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:55Z\\\",\\\"message\\\":\\\"3 16242 handler.go:203] Sending *v1.Node event handler 7 for removal\\\\nI0813 19:51:54.589848 16242 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:51:54.589868 16242 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:51:54.589895 16242 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:51:54.589924 16242 services_controller.go:231] Shutting down controller ovn-lb-controller\\\\nI0813 19:51:54.589937 16242 reflector.go:295] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:51:54.589952 16242 handler.go:203] Sending *v1.Node event handler 10 for removal\\\\nI0813 19:51:54.589975 16242 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:51:54.589985 16242 handler.go:217] Removed *v1.Node event handler 7\\\\nI0813 19:51:54.589996 16242 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:51:54.590680 16242 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:51:54.591579 16242 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.208750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.209852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.220029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.225077 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.231205 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:12 crc kubenswrapper[4183]: E0813 19:52:12.231753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.243493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.263920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.303742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.343108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.383571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.425073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.432973 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.433311 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.464596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.506033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.550945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.584192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.625323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.664291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.702715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.742888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.784249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.826128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.866561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.907159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.943165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:12 crc kubenswrapper[4183]: I0813 19:52:12.984025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.034256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.068620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.110215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.144326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.186159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.208718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208737 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.208922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209122 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209353 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209575 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.209735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.209961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.210991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211727 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.212175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.214675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.215962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:13 crc kubenswrapper[4183]: E0813 19:52:13.216375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.228249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.267215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.318565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.352450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.385065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.424902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.434553 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.434632 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.467043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.506485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.548500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.586552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.624309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.666883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.706643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.748034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.788673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.825458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.864231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.912549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.942434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:13 crc kubenswrapper[4183]: I0813 19:52:13.987846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:13Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.025168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.066442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.106490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.147068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.194562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.208727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209116 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:14 crc kubenswrapper[4183]: E0813 19:52:14.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.224109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.271373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.311935 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.345095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.390624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.424116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.432526 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.432615 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.464420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.503521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.545507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.592057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 
19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.626276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.663963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.703505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.744672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.783516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.824472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.864064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.905381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.948008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:14 crc kubenswrapper[4183]: I0813 19:52:14.992917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.038513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.064933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.104579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.145413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.198465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209473 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.209628 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210563 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210838 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211023 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.211636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212086 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.212227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.212940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213841 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214312 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.213906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.213973 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.214765 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.214947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.215121 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.215379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.215973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.216482 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.216999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.217219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.227402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' 
']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.268129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.309977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.349363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.390655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: E0813 19:52:15.415404 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.429276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.431279 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.431351 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.466036 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.507253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.544295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.590323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 
19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.624546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.665430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.703927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.746173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.790655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.824177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.865302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.905082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.946521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:15 crc kubenswrapper[4183]: I0813 19:52:15.992266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:15Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.024555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.067531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.107490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.146206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.190060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.209843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.210851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.210997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.211472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.231890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 
19:52:16.273999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.305629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.346866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.386216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the 
server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414183 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414204 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414229 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.414260 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.422942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.429501 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.433761 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.434145 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437038 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437076 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437088 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437109 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.437136 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.454608 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.459745 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460019 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460041 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.460107 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.466764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.477042 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.482659 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.482889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483021 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483137 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.483254 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.497658 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502267 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502326 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502363 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.502392 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:16Z","lastTransitionTime":"2025-08-13T19:52:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.510712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.517856 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: E0813 19:52:16.517912 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.545277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.584994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.624376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.665034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.704513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.744732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.787206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.824978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.865716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.906553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.944654 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:16 crc kubenswrapper[4183]: I0813 19:52:16.984642 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.026637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.071659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.110878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.154617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.193322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209441 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.210042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.209493 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210772 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.210500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.210978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211016 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.211986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.212921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.213931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.213993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.214875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.214325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.215561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.216978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.217727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.217989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.218608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:17 crc kubenswrapper[4183]: E0813 19:52:17.219942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.231440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.269561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.310566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.350303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.386924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.428004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.432319 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:17 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:17 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.432423 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.466660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.505851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.548122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.586931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.631724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.668022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.707047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.747098 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.788624 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.826323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.873994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.908004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:17 crc kubenswrapper[4183]: I0813 19:52:17.950001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.209742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:18 crc kubenswrapper[4183]: E0813 19:52:18.211124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.432039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:18 crc kubenswrapper[4183]: I0813 19:52:18.432145 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.209741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.210751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.211722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.211904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.212643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.213881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.213942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.214888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.214935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:19 crc kubenswrapper[4183]: E0813 19:52:19.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.433021 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:19 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:19 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:19 crc kubenswrapper[4183]: I0813 19:52:19.433099 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.208741 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.208959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.209479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:20 crc kubenswrapper[4183]: E0813 19:52:20.416675 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.432598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:20 crc kubenswrapper[4183]: I0813 19:52:20.432692 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209366 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209955 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.209958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210162 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.210944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.211943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212103 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.212367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.212949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.213097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.213944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:21 crc kubenswrapper[4183]: E0813 19:52:21.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.431557 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:21 crc kubenswrapper[4183]: I0813 19:52:21.431667 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209307 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.209413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.209634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:22 crc kubenswrapper[4183]: E0813 19:52:22.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.432638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:22 crc kubenswrapper[4183]: I0813 19:52:22.433195 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209546 4183 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210031 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210391 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210631 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210892 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.210945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.210997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211412 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.211951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.211973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.212150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.212757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.214943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:23 crc kubenswrapper[4183]: E0813 19:52:23.215686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.432541 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:23 crc kubenswrapper[4183]: I0813 19:52:23.432657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.208907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209165 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:24 crc kubenswrapper[4183]: E0813 19:52:24.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.432433 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:24 crc kubenswrapper[4183]: I0813 19:52:24.432563 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208554 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208640 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209063 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.208913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209241 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209509 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.209908 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210688 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.210994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.211150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.211368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.212242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213224 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.213893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.214252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.216566 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.217610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.226358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.242549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.299749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 
19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.341904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.359156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.375407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.390704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.409386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: E0813 19:52:25.417898 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.426634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.431590 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.431688 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.444429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.461537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.478427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.503501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.518568 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.532860 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.546481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.564679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.581764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.595706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.611643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.627545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.640945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.653988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.669505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.684758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.703379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.720294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.738580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.756591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.771551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.785974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.802235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.820982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.844721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.861896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.879449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.901580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.919540 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.935967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.952310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.970757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:25 crc kubenswrapper[4183]: I0813 19:52:25.987613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.005661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.040874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.056947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.071417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.087879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.100908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.117528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.137686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.152342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.169756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.184095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.200458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.208193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208477 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.208602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.208524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.218174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.238245 
4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: 
I0813 19:52:26.258348 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.278322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.295976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.312292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.329156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.349715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.369468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.389175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.458509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.462533 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 
19:52:26.462605 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.475948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.489750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.678978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679077 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679127 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.679154 4183 setters.go:574] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.695941 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701423 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701503 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701549 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.701580 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.714964 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.720902 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.721022 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.742042 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748221 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748300 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748354 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.748382 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.765415 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772596 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.772753 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.773066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:26 crc kubenswrapper[4183]: I0813 19:52:26.773111 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:26Z","lastTransitionTime":"2025-08-13T19:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.798253 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:26Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:26 crc kubenswrapper[4183]: E0813 19:52:26.798717 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.208401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.208754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.210666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.210872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.211920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.212012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.213468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.213905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.214970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:27 crc kubenswrapper[4183]: E0813 19:52:27.216468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.432599 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:27 crc kubenswrapper[4183]: I0813 19:52:27.432681 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.208875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.209576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:28 crc kubenswrapper[4183]: E0813 19:52:28.210577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.432746 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:28 crc kubenswrapper[4183]: I0813 19:52:28.432951 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.208710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.208977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.209864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.208532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.210945 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.211519 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.211857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213515 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.213539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.213663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.214194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.214935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.215891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.215961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.216730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.216761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.217649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.218705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:29 crc kubenswrapper[4183]: E0813 19:52:29.219115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.432152 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:29 crc kubenswrapper[4183]: I0813 19:52:29.432228 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.209563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.209953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.209184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:30 crc kubenswrapper[4183]: E0813 19:52:30.419375 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.432321 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:52:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:52:30 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:52:30 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:52:30 crc kubenswrapper[4183]: I0813 19:52:30.432414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.208980 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.208960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.209891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.211869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.211970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212834 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.212849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.213705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.213940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:31 crc kubenswrapper[4183]: E0813 19:52:31.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.432358 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:31 crc kubenswrapper[4183]: I0813 19:52:31.432514 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209368 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.210957 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:32 crc kubenswrapper[4183]: E0813 19:52:32.211220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.432208 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:32 crc kubenswrapper[4183]: I0813 19:52:32.432295 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210247 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.210655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.211154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.212788 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.212640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213532 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.213700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.213932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214098 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214498 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.214917 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.214974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215497 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.215719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.215893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.216873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.216924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.217256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217777 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:33 crc kubenswrapper[4183]: E0813 19:52:33.217921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.432982 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:33 crc kubenswrapper[4183]: I0813 19:52:33.433080 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:34 crc kubenswrapper[4183]: E0813 19:52:34.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.432071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:34 crc kubenswrapper[4183]: I0813 19:52:34.432196 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208487 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.208581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209072 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209684 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210211 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210417 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.210698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210722 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.210765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.211612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.213576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.213990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.214708 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.214972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.215715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.230689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5
bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.253852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.272341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.294992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.316486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.345283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.367209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.386589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.406740 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: E0813 19:52:35.422091 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.434961 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.435098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.455720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.484301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.504250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.526163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.545954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.561206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.582883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.601440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.619163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.636193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.655635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.673654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.697355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.721909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.742057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.764238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.786316 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.808679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.834060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.851181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.869679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.889315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.912331 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.939384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.962990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:35 crc kubenswrapper[4183]: I0813 19:52:35.982439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.011493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.031604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.049744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.068270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.088919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.105469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.119708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.137208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.163378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.178411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.193207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208558 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.208835 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.208928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.209619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:36 crc kubenswrapper[4183]: E0813 19:52:36.210040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.211242 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.237267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.285341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-api
server-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 
builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.319504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.339747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.361300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.383338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.402186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.420719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.432356 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:36 crc kubenswrapper[4183]: 
[-]has-synced failed: reason withheld Aug 13 19:52:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.432490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.440084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.464341 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.497535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.523495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.545963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.562734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.584371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.612921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.638333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.658049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.674476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.693032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:36 crc kubenswrapper[4183]: I0813 19:52:36.709945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176344 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176468 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.176536 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.196779 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205583 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205616 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205644 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.205894 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208602 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208777 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.208956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209043 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.208445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.209940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210755 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.211557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.211742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.213223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.230112 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.236917 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.236998 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237030 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237061 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.237097 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.256130 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266169 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266257 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.266363 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.292768 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303859 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303913 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.303961 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:37Z","lastTransitionTime":"2025-08-13T19:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.324874 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: E0813 19:52:37.324934 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.337735 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.342713 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf"} Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.343674 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.363228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.386204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.408880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.433285 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.433401 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.444023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47
4888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.468489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.487294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.513565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.539382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.563409 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.587610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.604960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.624506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.646496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.665473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.683084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.701585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.718976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.737227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.754727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.775330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.794554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.810987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.831508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.846189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.862428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.879920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.895570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.911611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.926516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.942401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.960427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.979386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:37 crc kubenswrapper[4183]: I0813 19:52:37.998986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.016299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.033333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.050437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.070426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.090225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.110010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.127127 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.149472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.162708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.180225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.200313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.208538 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.208952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.209473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.223468 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.242190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.265950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.286164 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.303585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.321033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.336490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.348697 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349385 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/0.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349487 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" exitCode=1 Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349571 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2"} Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.349612 4183 scope.go:117] "RemoveContainer" containerID="1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.350361 4183 scope.go:117] 
"RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:52:38 crc kubenswrapper[4183]: E0813 19:52:38.350946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.360041 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.363171 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/2.log" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.369945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.370756 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" exitCode=1 Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.370889 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf"} Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.372999 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:38 crc 
kubenswrapper[4183]: E0813 19:52:38.375539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.396916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.416534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.434054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.435514 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 
19:52:38.435584 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.444347 4183 scope.go:117] "RemoveContainer" containerID="2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.457290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.474354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.489300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.504978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.520207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.534614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.552553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.567979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.582668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.599143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.619122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.635775 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.652259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.667767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.706336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.742013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.781196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.821572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.863521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.914224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.939955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:38 crc kubenswrapper[4183]: I0813 19:52:38.980985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:38Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.021846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.064911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.103720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.142686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.182336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.209476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209919 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210188 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.210729 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211398 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.211954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212064 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.212904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.212921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.213086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.213768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.213956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.214683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.225661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.260868 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.300564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.343139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.376348 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.382404 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.383272 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:39 crc kubenswrapper[4183]: E0813 19:52:39.383985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.387456 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service 
ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.422137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.432161 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.432261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.462661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.502527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.542061 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.583749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.623154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.664280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.702579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.742490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.784263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.819964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.862032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.902937 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.944042 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:39 crc kubenswrapper[4183]: I0813 19:52:39.984281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.023729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.062937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.101556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.141470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.182193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.211862 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.211956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.212009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.212255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.222570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.271469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.303912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.340687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.382498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.420310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: E0813 19:52:40.423025 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.432362 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.432457 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.467180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.499089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.541457 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.580393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.622895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.663042 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.701558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.741302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.782293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.824763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.863089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.903084 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.944534 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:40 crc kubenswrapper[4183]: I0813 19:52:40.982268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.027964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.061227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.101132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.143663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209413 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.209946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210127 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209594 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.209446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.210710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.210983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.211746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.211973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.212211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.212995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.213927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.214214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.214749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:41 crc kubenswrapper[4183]: E0813 19:52:41.215040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.219652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b9499014ac6e90a7470da179079d21d771343cf59f1d9242bb4876b4f66e0aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:10Z\\\",\\\"message\\\":\\\"handler.go:203] Sending *v1.Namespace event handler 1 for removal\\\\nI0813 19:52:10.825320 16600 handler.go:203] Sending *v1.Namespace event handler 5 for removal\\\\nI0813 19:52:10.825330 16600 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:52:10.825339 16600 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:52:10.825369 16600 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:52:10.825371 16600 handler.go:217] Removed *v1.Namespace event handler 1\\\\nI0813 19:52:10.825412 16600 reflector.go:295] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:52:10.825423 16600 handler.go:217] Removed *v1.Namespace event handler 5\\\\nI0813 19:52:10.825382 16600 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0813 19:52:10.825464 16600 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nF0813 19:52:10.825509 16600 ovnkube.go:136] failed to run ovnkube: failed to start node network 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.241419 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.260555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.300637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.343492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.382770 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.421583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.433422 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 
19:52:41.433589 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.466744 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.501232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.540895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.582045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.622079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.669053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.701597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.741217 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.785021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.829952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.862883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.900544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.940912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:41 crc kubenswrapper[4183]: I0813 19:52:41.982482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.032242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.075143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.107989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.145901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.183904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.208943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.208972 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:42 crc kubenswrapper[4183]: E0813 19:52:42.209847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.223855 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.265003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.304026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.342508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.384622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.421764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.433003 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.433136 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.467297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.500714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-0
8-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.543349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.581567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.620907 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.660877 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.702673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.741913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.786140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.821018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.862122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.901428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.940972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:42 crc kubenswrapper[4183]: I0813 19:52:42.980905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.024003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.062070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.104511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.142213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.178631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.210006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208911 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208960 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.209899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.211963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.220947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.221533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.221942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.222986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.223210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.223451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.224623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.226919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.227965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:43 crc kubenswrapper[4183]: E0813 19:52:43.228269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.234944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.262286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.307238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.340870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.386695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.423363 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432499 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:52:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:52:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:52:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432576 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.432620 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.433737 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"} 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.433910 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" gracePeriod=3600 Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.471203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.504175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.542438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.583471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.627254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.663307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.702337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.741944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.784352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.821666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.862193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.901605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:43 crc kubenswrapper[4183]: I0813 19:52:43.947467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209141 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:44 crc kubenswrapper[4183]: I0813 19:52:44.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:44 crc kubenswrapper[4183]: E0813 19:52:44.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208263 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208573 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.208944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209086 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209245 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209572 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.209837 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.210758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.210920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.212970 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.213609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.228109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.243759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.257682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.274879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.299505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.315998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.333233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.349437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.369088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.386205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.402717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.418604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: E0813 19:52:45.424749 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.435063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.450501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.466728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.484494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.506103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.526745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.544121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.560293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.576479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.592342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.606424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.623898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.640028 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.656033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.672996 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.689672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.707571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.727997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.744728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.764464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.781356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.798692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.813233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.829522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.847609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.871681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.891981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.909756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.926926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.940339 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.960178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.979178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:45 crc kubenswrapper[4183]: I0813 19:52:45.997160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.014919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.030926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.056042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.070508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.085050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.100600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.120101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.136747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.151555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.167132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.184219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259949c7cd0a729c140267bdc2500e4782e6aae9a8263b8af65823a76b255d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:51:48Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71\\\\n2025-08-13T19:51:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38d13af8-eb1d-4e37-ac69-d640fc974f71 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:03Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:03Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:51:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209341 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.209692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.209925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:46 crc kubenswrapper[4183]: E0813 19:52:46.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.221974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.262428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.303976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.341939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.384296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.426470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.463005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.506418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.541329 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.578547 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:46 crc kubenswrapper[4183]: I0813 19:52:46.621934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.208941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209164 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209374 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209585 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209765 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.209958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.209736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210359 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.210504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211387 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.211965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.212643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.212720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.214832 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.215558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512623 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.512968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.513100 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.529050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535748 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535910 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535934 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.535958 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.553158 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558619 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558668 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558683 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.558724 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.574415 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579446 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579561 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579588 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.579612 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.594950 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 19:52:47.574415 entry above; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601542 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601662 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601683 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.601734 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:47Z","lastTransitionTime":"2025-08-13T19:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.617075 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.617146 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833413 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833517 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833634 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833667 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833703 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.833909 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 
19:52:47.833949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834123 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834210 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.834467 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object 
"openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.834564 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.834546662 +0000 UTC m=+656.527211290 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834595 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834632 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.834751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835047 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835108 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835096868 +0000 UTC m=+656.527761486 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835161 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.83517882 +0000 UTC m=+656.527843438 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835238 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835268 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835260362 +0000 UTC m=+656.527924980 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835316 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835346 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835334735 +0000 UTC m=+656.527999353 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835367 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835396 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835418 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:49.835407137 +0000 UTC m=+656.528071765 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835433 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835425587 +0000 UTC m=+656.528090205 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835480 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835498 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835509 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:49.835498629 +0000 UTC m=+656.528163327 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835519 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835532 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835559 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835556891 +0000 UTC m=+656.528221509 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835581162 +0000 UTC m=+656.528245780 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835613 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835635 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835671 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835634823 +0000 UTC m=+656.528299491 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835680264 +0000 UTC m=+656.528344862 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835697 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835716745 +0000 UTC m=+656.528381363 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835879 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835906401 +0000 UTC m=+656.528571019 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.835977 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836005 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.835997893 +0000 UTC m=+656.528662511 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836177 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836192 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.836228 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.83621969 +0000 UTC m=+656.528884308 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839663 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.839998 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840018 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840108 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840128 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.84010544 +0000 UTC m=+656.532770178 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840160 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840148222 +0000 UTC m=+656.532813000 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840202 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840234 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840226214 +0000 UTC m=+656.532890892 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.840036 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840292 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.840363 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.840347067 +0000 UTC m=+656.533011775 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.841454 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845391 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.84537118 +0000 UTC m=+656.538035978 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.841251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.845860 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845907 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.845966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.845982 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.846010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846067 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846023979 +0000 UTC m=+656.538688597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846112 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.846128 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846153432 +0000 UTC m=+656.538818150 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846189 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846247 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846228075 +0000 UTC m=+656.538892803 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.846404 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.846388059 +0000 UTC m=+656.539052777 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948020 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948087 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948247 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948362 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948341411 +0000 UTC m=+656.641006159 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948448 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948432933 +0000 UTC m=+656.641097531 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948586 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948271 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948629239 +0000 UTC m=+656.641293857 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948682 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.948689 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948713 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948706071 +0000 UTC m=+656.641370689 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948748 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948877 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948765113 +0000 UTC m=+656.641429721 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948917 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948955 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.948946708 +0000 UTC m=+656.641611446 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.948976 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949008 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.949001269 +0000 UTC m=+656.641665887 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.949714 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:47 crc kubenswrapper[4183]: I0813 19:52:47.949904 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949914 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.949960 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.949947996 +0000 UTC m=+656.642612724 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.950001 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:52:47 crc kubenswrapper[4183]: E0813 19:52:47.950039 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:49.950030879 +0000 UTC m=+656.642695497 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051168 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051252 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051282 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051321 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051347 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051355 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051467 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051441145 +0000 UTC m=+656.744105753 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051471 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051516157 +0000 UTC m=+656.744180885 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051555 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051658 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051683 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051695 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051730 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.051722343 +0000 UTC m=+656.744387081 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.051760 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.051955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052166 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052227 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052213467 +0000 UTC m=+656.744878215 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052226 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052265 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052252658 +0000 UTC m=+656.744917246 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052297 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052282939 +0000 UTC m=+656.744947677 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052167 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052326 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052328 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052340 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052347 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052353 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052359 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052387 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052378722 +0000 UTC m=+656.745043330 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052407 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052400002 +0000 UTC m=+656.745064590 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052427 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052462 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052468 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052455724 +0000 UTC m=+656.745120432 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052534 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052518746 +0000 UTC m=+656.745183424 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052553 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052544236 +0000 UTC m=+656.745208884 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052310 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052594 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052606 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 
19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052642 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052631639 +0000 UTC m=+656.745296357 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052246 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052672 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052684 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.052721 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.052709461 +0000 UTC m=+656.745374169 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.052939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053009 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053020 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053051 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05304065 +0000 UTC m=+656.745705268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053090 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053131 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053119443 +0000 UTC m=+656.745784151 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053132 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053155 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053168 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053169 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053190 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053199 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc 
kubenswrapper[4183]: E0813 19:52:48.053202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053192045 +0000 UTC m=+656.745856853 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053092 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053233 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053222586 +0000 UTC m=+656.745887334 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053373 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053463 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: 
\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053470 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053487 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053495 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053517 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053508314 +0000 UTC m=+656.746173112 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053543 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053552 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053564 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053573 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 
19:52:48.053605 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053624 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053637 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053641708 +0000 UTC m=+656.746306316 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053675 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053690 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053717 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053710459 +0000 UTC m=+656.746375268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053733 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05372581 +0000 UTC m=+656.746390518 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053858 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053886 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053899 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053901125 +0000 UTC m=+656.746566023 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053953 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053971 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.053972 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.053961097 +0000 UTC m=+656.746625915 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054003 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.053991417 +0000 UTC m=+656.746656106 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054011 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054031 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054020818 +0000 UTC m=+656.746685486 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.053863 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054064 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054053 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054041739 +0000 UTC m=+656.746706537 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054084 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054097 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054192 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054195 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object 
"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054224 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054205694 +0000 UTC m=+656.746870362 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054254 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054267 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054275 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: 
\"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054298946 +0000 UTC m=+656.746963694 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054330 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054317477 +0000 UTC m=+656.746982195 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054363 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054384 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054403 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054410 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object 
"openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054459 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05444543 +0000 UTC m=+656.747110118 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054492 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054505 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054513 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.054537283 +0000 UTC m=+656.747202021 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054652 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054693 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054684927 +0000 UTC m=+656.747349545 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054750 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054858 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054870 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054897 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054882423 +0000 UTC m=+656.747547121 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054926 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054941 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054958 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.054950025 +0000 UTC m=+656.747614643 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.054989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.054991 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055025 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055017657 +0000 UTC m=+656.747682255 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055032 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055052 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055074498 +0000 UTC m=+656.747739086 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055085 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055105 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055124 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055127 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05512119 +0000 UTC m=+656.747785808 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055155 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055163 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055152041 +0000 UTC m=+656.747816729 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055191 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055195 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not 
registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055241 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055249283 +0000 UTC m=+656.747913961 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055242 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055280 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055271414 +0000 UTC m=+656.747936072 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055301 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055328 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055352 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055374 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055393 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 
19:52:48.055399 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055404 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055413 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055439 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055428488 +0000 UTC m=+656.748093176 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055459 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055471 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055479 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055460 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055450469 +0000 UTC m=+656.748115117 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055512 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055505261 +0000 UTC m=+656.748169849 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055523 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055542 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055553 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] 
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055519611 +0000 UTC m=+656.748184199 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.055592043 +0000 UTC m=+656.748256721 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.055749 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: 
\"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.055975 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056102 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056282 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056298 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.056306 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057081 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057106 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057091736 +0000 UTC m=+656.749756354 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057135 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057129047 +0000 UTC m=+656.749793645 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057151 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057144567 +0000 UTC m=+656.749809165 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057169 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057188 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057228 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access 
podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057215689 +0000 UTC m=+656.749880387 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057249 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057269571 +0000 UTC m=+656.749934289 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057314 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057351 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057341133 +0000 UTC m=+656.750005831 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057377 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057383 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057400 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057394344 +0000 UTC m=+656.750058952 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057450 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057472 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057465906 +0000 UTC m=+656.750130514 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057505 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057511 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057527 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057521288 +0000 UTC m=+656.750185896 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057571 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057601 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057612 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.05759978 +0000 UTC m=+656.750264498 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057632 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057623531 +0000 UTC m=+656.750288209 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057648 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057669 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057663382 +0000 UTC m=+656.750327990 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057695 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057702 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057712 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057724 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057428 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057572 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057718324 +0000 UTC m=+656.750382952 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057876 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057771795 +0000 UTC m=+656.750436433 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.057900 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.057890248 +0000 UTC m=+656.750554906 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.057959 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058028 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: 
\"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.058091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058168 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058196 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058188837 +0000 UTC m=+656.750853455 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058238 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058262 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058255079 +0000 UTC m=+656.750919697 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058295 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058306 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058318 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.05831096 +0000 UTC m=+656.750975558 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058354 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058344461 +0000 UTC m=+656.751009089 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058357 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.058383 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.058377552 +0000 UTC m=+656.751042170 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159673 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159733 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159861 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159879 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.159933 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.159974 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.159953133 +0000 UTC m=+656.852617871 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160056 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160073 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160085 4183 projected.go:200] Error preparing data for projected 
volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160137 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160122398 +0000 UTC m=+656.852787016 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160061 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160216 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: 
\"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160384 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160401 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160415 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160428 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object 
"openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160438 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160461 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160476 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160484 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160497 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160404996 +0000 UTC m=+656.853069594 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160540 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16053091 +0000 UTC m=+656.853195498 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160552 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160388 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160554 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls 
podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16054813 +0000 UTC m=+656.853212718 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160596 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160588171 +0000 UTC m=+656.853252759 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160613 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160606452 +0000 UTC m=+656.853271120 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160837 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160825658 +0000 UTC m=+656.853490346 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160874 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160919 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.160966 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.160994 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161006 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.160996343 +0000 UTC m=+656.853661061 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161038 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161059 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161073 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161080 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161119 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161131 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161133 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161139 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object
"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161171 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.161161908 +0000 UTC m=+656.853826536 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161208 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161234 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:54:50.16122758 +0000 UTC m=+656.853892268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161341 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161372 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161385 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161445 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" 
failed. No retries permitted until 2025-08-13 19:54:50.161422015 +0000 UTC m=+656.854086773 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161456 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161469 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161478 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161502 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.161495647 +0000 UTC m=+656.854160265 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161548 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161574 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161628 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.161964 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161973 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162047 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162053 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162063 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162097 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162087214 +0000 UTC m=+656.854751842 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162133 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162146 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162155 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162189 4183 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162180547 +0000 UTC m=+656.854845165 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162222 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162247 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162271 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162355 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162364 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162396 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162405 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client 
podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162394913 +0000 UTC m=+656.855059631 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162451 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: 
\"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162705 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162712 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162721 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162732 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162745 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162761 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162752873 +0000 UTC m=+656.855417581 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.162856 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162894 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162910 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162918 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object 
"openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162947 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.162938358 +0000 UTC m=+656.855603076 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.162984 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163007 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.1630009 +0000 UTC m=+656.855665518 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163007 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163034 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163028701 +0000 UTC m=+656.855693399 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163047 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163075 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163081 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163082962 +0000 UTC m=+656.855747750 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163116 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163105873 +0000 UTC m=+656.855770531 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163130 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163126314 +0000 UTC m=+656.855790902 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.161082 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163150484 +0000 UTC m=+656.855815102 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163178 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163171245 +0000 UTC m=+656.855835933 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163194 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163212 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163215276 +0000 UTC m=+656.855880004 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163237 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163230867 +0000 UTC m=+656.855895575 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163251 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163269 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163277 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163269368 +0000 UTC m=+656.855934066 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163294 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163285908 +0000 UTC m=+656.855950616 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163309 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163322 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163327219 +0000 UTC m=+656.855991827 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163053 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163349 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.16334246 +0000 UTC m=+656.856007168 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163355 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163367 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163392 
4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163385961 +0000 UTC m=+656.856050569 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163401 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163416 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163442 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163451 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg 
podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163442143 +0000 UTC m=+656.856106761 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163456 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163466 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163483484 +0000 UTC m=+656.856148162 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163526 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163551 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163545526 +0000 UTC m=+656.856210144 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163594 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163604 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163614 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object 
"openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163637 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163631288 +0000 UTC m=+656.856295896 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.163934 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.163922136 +0000 UTC m=+656.856586864 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.208982 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.209398 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.209721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.210714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.265051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265289 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265336 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265350 4183 projected.go:200] Error preparing 
data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.265449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.265423635 +0000 UTC m=+656.958088343 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.266701 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:48 crc kubenswrapper[4183]: I0813 19:52:48.266898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267261 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object 
"openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267323 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267583 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267694 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.267669979 +0000 UTC m=+656.960334777 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267438 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267728 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:48 crc kubenswrapper[4183]: E0813 19:52:48.267769 4183 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:54:50.267758041 +0000 UTC m=+656.960422719 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209299 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.209951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.210985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.211867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.211929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.212254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.212994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:49 crc kubenswrapper[4183]: I0813 19:52:49.213316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.214236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:49 crc kubenswrapper[4183]: E0813 19:52:49.214377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.210910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:52:50 crc kubenswrapper[4183]: I0813 19:52:50.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.211965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:52:50 crc kubenswrapper[4183]: E0813 19:52:50.427233 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.210892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211220 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211482 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.211936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.211979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.212396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.212765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.213062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:51 crc kubenswrapper[4183]: I0813 19:52:51.214770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.214982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:51 crc kubenswrapper[4183]: E0813 19:52:51.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209278 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.209873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:52 crc kubenswrapper[4183]: I0813 19:52:52.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:52 crc kubenswrapper[4183]: E0813 19:52:52.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.209921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210397 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.210510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.210868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.211865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.211975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.212879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.213914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.214973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.215127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.215661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.221724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.222903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.223183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.223953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.225690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.226245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.226710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.227608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.228634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.228893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.229561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:53 crc kubenswrapper[4183]: E0813 19:52:53.230050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.230120 4183 scope.go:117] "RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.272729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.291619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.309554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.353377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.374133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.391290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.412137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.432056 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.450312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.469314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.493533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.528063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.548427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.580703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.597874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.624158 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.644935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.660446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.678441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.704178 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.723474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.747950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.766311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.784418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.802502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.827606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.887004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.909603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.932961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.956100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:53 crc kubenswrapper[4183]: I0813 19:52:53.984708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.007942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.036349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.060298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.084656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.104630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.124106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.147000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.180891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.204590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.208872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.209105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.209343 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.209487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.210386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211362 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.211142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.210965 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.212583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:54 crc kubenswrapper[4183]: E0813 19:52:54.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.249073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.272723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.325278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.358963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.384407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.438616 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.451481 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.451620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"} Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.465028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.487264 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.512259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.534289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.557134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.580708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.602307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.621275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.643473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.660925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671428 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671592 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671620 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671648 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.671669 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.680214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.697441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd6
4fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.717333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.739209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.755713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.776009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.796024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.814228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.833608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.858330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.872556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.901065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.917405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.939248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.962034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:54 crc kubenswrapper[4183]: I0813 19:52:54.986637 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:54Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.004638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.056963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.073019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.100671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.121120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.136025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.154190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209085 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.208968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209501 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.210674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211381 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211642 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.211892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.211940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212304 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.212904 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.212979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.213972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.214079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.214166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.223465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to 
decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.245891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.276459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.294213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.311184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.325979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.342197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.360517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.378013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.397463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.414562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: E0813 19:52:55.428270 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.430239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.446119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580
a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.464652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.481684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.497160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.515621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.530951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.546912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.560681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.576488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.590894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.606186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.622268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.641249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.655300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.670146 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.687590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.700914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.719965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.733304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.750473 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.767552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.783418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.806909 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.847632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.891223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.926932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:55 crc kubenswrapper[4183]: I0813 19:52:55.967972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.009327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.052226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.088056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.128925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.168055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.208987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.210002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.210221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:56 crc kubenswrapper[4183]: E0813 19:52:56.210887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.217453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.250113 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.287005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.340495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.366313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.409898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.448664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.488369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.527413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.568209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.606303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.645419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.686023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.727454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.768714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.809265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.851656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.890551 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.927907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:56 crc kubenswrapper[4183]: I0813 19:52:56.968662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.007645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.048984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.085914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.126509 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.167512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.208608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.208956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.209356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.210696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.211729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.211959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.212896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.212956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.213672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.213909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.214863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.214920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.215752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.215979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.216992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.217088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.217183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.221431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.248209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.288145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.325022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.369307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.407085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.445620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.487062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.526150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.565870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.622015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.649319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.694467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.726227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.767050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.816111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 
2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825128 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825224 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.825280 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.844592 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.849662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kub
e-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851331 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851403 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 
19:52:57.851450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851474 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.851506 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.867424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.872876 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873086 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873106 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.873133 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.890447 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.892430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895277 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895408 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.895429 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.911046 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917452 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917496 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.917525 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:52:57Z","lastTransitionTime":"2025-08-13T19:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.930628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using 
insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.934081 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:57 crc kubenswrapper[4183]: E0813 19:52:57.934152 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:52:57 crc kubenswrapper[4183]: I0813 19:52:57.974238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:57Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.064854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.080330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.125372 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.146370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.167079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.208356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.208602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.208981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.209664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.210328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:52:58 crc kubenswrapper[4183]: E0813 19:52:58.210656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.248416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.292560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.345142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.369067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.412622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.451603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.489734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.529354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.567941 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.616097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.650751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.691277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.728302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.769409 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.808077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.848614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.888267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.929602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:58 crc kubenswrapper[4183]: I0813 19:52:58.972584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:58Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.012287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.048247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.087204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.127933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.167383 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.206258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208525 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209326 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209643 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.209742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.209936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.210988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.211867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.211953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.212482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.212940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.213463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.213726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:52:59 crc kubenswrapper[4183]: E0813 19:52:59.214249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.251501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:52:59 crc kubenswrapper[4183]: I0813 19:52:59.288162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.209476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.209762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.209988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:00 crc kubenswrapper[4183]: I0813 19:53:00.210394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.211181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.211344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:00 crc kubenswrapper[4183]: E0813 19:53:00.430758 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.210716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.210948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.211882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.211944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212175 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212620 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.212893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.212696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213479 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.213761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.213884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214745 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.214265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.214979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.215258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.215881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.216936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217097 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:01 crc kubenswrapper[4183]: I0813 19:53:01.217357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.217587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:01 crc kubenswrapper[4183]: E0813 19:53:01.218944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.208890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209001 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209319 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:02 crc kubenswrapper[4183]: I0813 19:53:02.209704 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:02 crc kubenswrapper[4183]: E0813 19:53:02.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.211935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.212897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:03 crc kubenswrapper[4183]: I0813 19:53:03.213443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.213621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.214978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.215587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.216946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.217669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.218500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.218629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:03 crc kubenswrapper[4183]: E0813 19:53:03.220023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:04 crc kubenswrapper[4183]: I0813 19:53:04.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:04 crc kubenswrapper[4183]: E0813 19:53:04.210695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.208859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.209729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.209969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210395 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210619 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210674 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.210748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211110 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211556 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.211694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.211935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.212031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213021 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213345 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.213413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.213872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.214733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.215129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.232749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.249553 
4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.265949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.288121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.303250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.325324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.346055 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.364583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.385025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.400961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.420532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: E0813 19:53:05.432664 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.438473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.456304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.474210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.502157 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.523232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.542231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.559857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.577721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.595336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.610971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.628535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.644528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.660739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.687394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.706478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.726959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.745459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.761669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.793258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.810717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.826241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.844393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.865729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.880658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.905939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.924566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.943441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.958690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.976536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:05 crc kubenswrapper[4183]: I0813 19:53:05.991988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.009247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.029199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.051684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.074064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.091026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.108384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.127712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.161603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.176522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.193566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208854 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.208969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:06 crc kubenswrapper[4183]: E0813 19:53:06.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.226697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.247553 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.262937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.280143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.297460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.314589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.329411 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.344491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.363875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.384139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.399159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.416480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.431613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.446542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:06 crc kubenswrapper[4183]: I0813 19:53:06.465060 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.209954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210208 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210609 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.210970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211376 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.211379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.212433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.213145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.213689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.214999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.215462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.215645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.216002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.216246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.216700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.217137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.217521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.217700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.218090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.218567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.218942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.219512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.219960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.220171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.221482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:07 crc kubenswrapper[4183]: I0813 19:53:07.221573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.221997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.222882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:07 crc kubenswrapper[4183]: E0813 19:53:07.223037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112494 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112560 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112579 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.112629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.127077 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133096 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133115 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133137 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.133163 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.149139 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154626 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154671 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.154695 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.170357 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175049 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175276 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.175547 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.194226 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.199715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.199900 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.199980 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.200001 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.200092 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:08Z","lastTransitionTime":"2025-08-13T19:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.208980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:08 crc kubenswrapper[4183]: I0813 19:53:08.212317 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.212972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.219237 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:08Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:08 crc kubenswrapper[4183]: E0813 19:53:08.219362 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210263 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.210562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.210867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.211145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.211568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.212548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.212970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.213392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.213759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.214190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.214707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.214964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.215074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.210277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.216642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.216686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.216869 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.217886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.217972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.218580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.218778 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.219165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.219771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.219975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.220232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.220859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.220986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.221887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.221974 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.222741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:09 crc kubenswrapper[4183]: I0813 19:53:09.222762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.224671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:09 crc kubenswrapper[4183]: E0813 19:53:09.223733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.144518 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.144678 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.208509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.208775 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.208901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209108 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:10 crc kubenswrapper[4183]: I0813 19:53:10.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:10 crc kubenswrapper[4183]: E0813 19:53:10.434340 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209427 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.209916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210663 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.210756 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.210960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211058 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.211579 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212037 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212256 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.212893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.212983 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213435 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.213771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:11 crc kubenswrapper[4183]: I0813 19:53:11.213881 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.214906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:11 crc kubenswrapper[4183]: E0813 19:53:11.215618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208753 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.208988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.208990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:12 crc kubenswrapper[4183]: I0813 19:53:12.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:12 crc kubenswrapper[4183]: E0813 19:53:12.209927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.209590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.209980 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210127 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210464 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.210899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.210983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211113 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211644 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.211892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.211999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212214 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:13 crc kubenswrapper[4183]: I0813 19:53:13.212341 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.212962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:13 crc kubenswrapper[4183]: E0813 19:53:13.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209157 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:14 crc kubenswrapper[4183]: I0813 19:53:14.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:14 crc kubenswrapper[4183]: E0813 19:53:14.209583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208158 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208340 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208425 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.208360 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.210547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.210898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211526 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.211952 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.212182 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.212921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.213135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.213907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.213958 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.214857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.215985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.216941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.217689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.232917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.257095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.275741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.294017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.311263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.326082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.350167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.367321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.386701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.406995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.424198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: E0813 19:53:15.436517 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.443739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.475644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.490565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.504499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.521441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.546015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.564356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.581878 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.601415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.619904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.636727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.651462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.670842 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.687560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.705231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.722704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.746117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.764567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.779374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.793760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.812122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.829014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.844001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.858650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.874405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.892251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.911169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.927621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.944868 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.962649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.979042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:15 crc kubenswrapper[4183]: I0813 19:53:15.996574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.012300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.026681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.043512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.058980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.075251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.094188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.110110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.131981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.149296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.164255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.182059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.196450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.209434 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.209908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:16 crc kubenswrapper[4183]: E0813 19:53:16.210220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.217025 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.233035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.249264 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.266023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.282951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.303166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.318633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.338112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.356717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.376128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.405346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:16 crc kubenswrapper[4183]: I0813 19:53:16.420631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.208956 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.208959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209190 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209483 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.209560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.209958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210260 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.210654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.211467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.211928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.212068 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.212721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.212955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:17 crc kubenswrapper[4183]: I0813 19:53:17.213080 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:17 crc kubenswrapper[4183]: E0813 19:53:17.213860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.209715 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.209949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.210463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.620166 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.620735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.621382 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.621985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.622493 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.651260 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659754 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.659968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.660001 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.683285 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692271 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692395 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692680 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.692976 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.709458 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716134 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716282 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.716598 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.731537 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737392 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737532 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737635 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.737765 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:18 crc kubenswrapper[4183]: I0813 19:53:18.738116 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:18Z","lastTransitionTime":"2025-08-13T19:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.752496 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:18 crc kubenswrapper[4183]: E0813 19:53:18.752555 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.208682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.208988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209267 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209595 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.209987 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.210659 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211541 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.211696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.211960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.212924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:19 crc kubenswrapper[4183]: I0813 19:53:19.214505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.214769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.215055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:19 crc kubenswrapper[4183]: E0813 19:53:19.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.208661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.208983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:20 crc kubenswrapper[4183]: I0813 19:53:20.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.209634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:20 crc kubenswrapper[4183]: E0813 19:53:20.438349 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.210995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.211477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.212735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.213864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.213944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:21 crc kubenswrapper[4183]: I0813 19:53:21.214981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.214999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:21 crc kubenswrapper[4183]: E0813 19:53:21.215172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.208944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209062 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.209957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.210455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:22 crc kubenswrapper[4183]: E0813 19:53:22.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.211475 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.567394 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.573178 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"} Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.573927 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.593181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.607752 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.622102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.648006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.698183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.727766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.752717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.781315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.806877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.830051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.847684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.865368 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.882685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.905244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.923713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.936009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.952129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.967511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:22 crc kubenswrapper[4183]: I0813 19:53:22.984148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.003141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.024410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.041828 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.065370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.083555 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.103218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.125183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.144210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.163094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.180890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.199082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.208670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.208977 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209480 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.209970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210026 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.210439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.210953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.211891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.211974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.212882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.213429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.213437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.214705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.215915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:23 crc kubenswrapper[4183]: E0813 19:53:23.216241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.222706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.250925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.268285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.288220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.310289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.328407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.347659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.365228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.382364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.404866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.420067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.449433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.477099 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.502895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.520673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.539114 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.565991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.588006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.608159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.635103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.655737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.675089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.694536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.718288 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.736921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.756496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.772937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.789085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.816278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.835668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.851892 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.867883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.888283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.905277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.923323 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.944177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:23 crc kubenswrapper[4183]: I0813 19:53:23.964361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:23Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209240 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.209658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.209989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.210374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.583497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.584634 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/3.log" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589535 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" exitCode=1 Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589610 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"} Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.589659 4183 scope.go:117] "RemoveContainer" containerID="ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.591641 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:24 crc kubenswrapper[4183]: E0813 19:53:24.592274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:24 crc 
kubenswrapper[4183]: I0813 19:53:24.611151 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.630162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.650660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.670662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.690138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.711723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.728917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.752108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.772436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.791573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.810438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.825256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.842180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.865908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.889759 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.905930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.925144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.941462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.968054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:24 crc kubenswrapper[4183]: I0813 19:53:24.993271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.013392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.031716 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.051455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.069413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.095557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.113754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.130412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.148394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.171521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.190070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.206906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209254 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.209436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209854 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209981 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210161 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210332 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.210737 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.211905 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.211999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.212132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213385 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.213621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.214926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.215408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.236087 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.253641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.268482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.285117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.304254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.322188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.342421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.364543 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.384643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.406592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.422127 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.435988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: E0813 19:53:25.440076 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.457238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.475270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.495190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.510984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.526122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.541479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.557306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.571163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.588255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.595319 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.613377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.627767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.644326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.661423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.683956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.703188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.718349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.742963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.762330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.778538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.799734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.829227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.868714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.912340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.948941 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:25 crc kubenswrapper[4183]: I0813 19:53:25.988226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.033058 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.070414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.109327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.149852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.193585 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208467 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.208930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:26 crc kubenswrapper[4183]: E0813 19:53:26.209159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.232056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.269389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.311014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.353926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.389309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.426273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.468221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.514729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.552699 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.592610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.631348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.668980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.707527 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.747072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.787953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.826284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.867460 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.936428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:26 crc kubenswrapper[4183]: I0813 19:53:26.990071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.007532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.025362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.066572 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.110043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.153537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.188630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209174 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209743 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.210936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.211973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.211985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.212766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.212925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.213635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.213748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.213926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.214508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.215028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.214664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.214862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:27 crc kubenswrapper[4183]: E0813 19:53:27.216710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.227563 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.269975 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.307988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.346944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.389491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.430501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.469200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.509052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.549620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.586342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.629426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.667841 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.708647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.753720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.788652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.827499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.866369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.909362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.947887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:27 crc kubenswrapper[4183]: I0813 19:53:27.991220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:27Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.028611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.069532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.108221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.150567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.186164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208425 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208753 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.208906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:28 crc kubenswrapper[4183]: E0813 19:53:28.209422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.230106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.269551 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.307370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.348529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.390113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.429726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.469704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.509945 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.545067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.589112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:28 crc kubenswrapper[4183]: I0813 19:53:28.625949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077248 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077744 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.077989 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.078215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.078358 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.092331 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097414 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097465 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097500 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.097527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.111095 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115706 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.115885 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.116049 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.129256 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133742 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133881 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.133942 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.146308 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.150990 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.151009 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.151029 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:29Z","lastTransitionTime":"2025-08-13T19:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.165069 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.165121 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.208880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209871 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210104 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.211962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.212145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.211972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.212946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.212985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:29 crc kubenswrapper[4183]: E0813 19:53:29.214160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.623087 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" exitCode=0
Aug 13 19:53:29 crc kubenswrapper[4183]: I0813 19:53:29.623611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839"}
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.208912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.208985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.209364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.209638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:30 crc kubenswrapper[4183]: E0813 19:53:30.441591 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.630111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"}
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.650906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.675255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.693444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.709315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.732295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.780334 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.799883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.817633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.832910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.850378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.867869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.885717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.903042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.920897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.940193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.957944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.975079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:30 crc kubenswrapper[4183]: I0813 19:53:30.993986 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.009763 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.024521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.045401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.062640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.082641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.102222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.127310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.144765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.163260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.183273 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.205641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211186 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211451 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211686 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.211734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212284 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212713 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212764 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.212948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.212714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.213704 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.214676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.214948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.215958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.216915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.217061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:31 crc kubenswrapper[4183]: E0813 19:53:31.217155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.229367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.246833 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.265301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.288169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.302377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.318284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.340349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.364733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.385035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.401403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.416662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.430462 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.431706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.437912 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.438012 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.446452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.464122 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.479497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.496936 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.517951 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.535666 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.550668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.567720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.581755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.606953 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.624174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.643075 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.670693 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.689319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.704929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.718165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.733629 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.758219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.778196 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.797734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.820394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.838613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.860536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.878112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.894491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:31 crc kubenswrapper[4183]: I0813 19:53:31.911205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.209695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210010 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.210847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.210990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:32 crc kubenswrapper[4183]: E0813 19:53:32.211296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.432020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:32 crc kubenswrapper[4183]: I0813 19:53:32.432567 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208686 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.208699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.208902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208980 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.208510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209183 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209431 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.209768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210529 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.210959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.211511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.211956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.212053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.212109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.209686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.212976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.213046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:33 crc kubenswrapper[4183]: E0813 19:53:33.213434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.432657 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:33 crc kubenswrapper[4183]: I0813 19:53:33.432750 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.209971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:34 crc kubenswrapper[4183]: E0813 19:53:34.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.432582 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:34 crc kubenswrapper[4183]: I0813 19:53:34.432909 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.208921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.208587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.210990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.211887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.211911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.213634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.228976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.246680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.264431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.281380 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.305188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac356ad4260c40da4d4c53d998ba30d5e01808ef1a071b15b66988d2df3aeecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:37Z\\\",\\\"message\\\":\\\".4\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0813 19:52:37.663652 17150 metrics.go:552] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0813 19:52:37.664114 17150 ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:52:37Z is after 2024-12-26T00:46:02Z\\\\nI0813 19:52:37.663319 17150 services_controller.go:421] Built service openshift-kube-apiserver/apiserver cluster-wide 
LB []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver/apiserver_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver/apiserver\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.86\\\\\\\", Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector 
*v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.324247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.339954 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.356597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.370849 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.393555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.410696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.427482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.432627 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.433086 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.443071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: E0813 19:53:35.443253 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.459449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.476096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.500226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.515552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.529081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.551895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.569436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.586751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.603408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.620765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.637101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.657719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.672039 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.690353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.707181 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.724412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.746962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.769400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.790167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.805204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.821278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.841564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.862171 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.877699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.894078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.913498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.931602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.950005 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.970697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:35 crc kubenswrapper[4183]: I0813 19:53:35.990188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.009600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.054363 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.082422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.117853 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.136869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.156267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.175321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.195651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209129 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.209565 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.209606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.210254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:36 crc kubenswrapper[4183]: E0813 19:53:36.210526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.216507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.239172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.257266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.278303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.293213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.311450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.330961 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.351692 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.373683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.392943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.409003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.429216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.433245 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.433347 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.467128 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.483283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.501753 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:36 crc kubenswrapper[4183]: I0813 19:53:36.521272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.208877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208906 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209238 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210562 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.210660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.210755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.211206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211378 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.211688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.212537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212875 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.212944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213142 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.213757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.213890 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214213 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.214325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.214688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.215986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:37 crc kubenswrapper[4183]: E0813 19:53:37.216433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.433091 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:37 crc kubenswrapper[4183]: I0813 19:53:37.433234 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.208859 4183 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.209162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.209760 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:38 crc kubenswrapper[4183]: E0813 19:53:38.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.432553 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:38 crc kubenswrapper[4183]: I0813 19:53:38.432689 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208561 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208954 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.208937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.208933 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.209965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210991 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211038 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.210996 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211214 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.212928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.212947 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.213743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.215095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.215567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.231024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.245045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.261434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.276633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.301151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.317028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.333623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.348741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.367248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.384065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.401651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.417090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427537 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427555 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.427580 4183 
setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.432836 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.432948 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.437111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.443272 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448082 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448191 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.448214 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.456699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.463185 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468328 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468672 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.468908 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.469149 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.469440 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.475478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.485313 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.490333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.492504 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.492869 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.493212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.493577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: 
I0813 19:53:39.493911 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.508022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.510746 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516420 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516489 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516539 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.516571 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:39Z","lastTransitionTime":"2025-08-13T19:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.527094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.538002 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: E0813 19:53:39.538601 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.542631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.558296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.571119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.594431 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.611651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.626397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.648604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.664625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.681279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.695379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.712153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.733239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.746960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.760668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.790297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.815297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.842519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.867012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.882724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.898521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.915112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.934385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.952227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.968554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.983660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:39 crc kubenswrapper[4183]: I0813 19:53:39.999704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:39Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.013880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.037096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.052019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.067281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.081054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.099048 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.116529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.134155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.144190 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.144297 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 
19:53:40.150093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.169843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:52:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.184050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.197481 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.208957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209169 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.214938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.236241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.257580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.281956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.300995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.318871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.336729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.351079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.373371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.389739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.406613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.430230 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.432939 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.433033 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.444450 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.666554 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667349 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/1.log" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667412 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" exitCode=1 Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667440 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"} Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667474 4183 scope.go:117] "RemoveContainer" containerID="9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.667995 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:53:40 crc kubenswrapper[4183]: E0813 19:53:40.668458 4183 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.817399 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d5
37796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.833153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.854704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.870102 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.895697 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.911390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.927015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.942995 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.958510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.972748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:40 crc kubenswrapper[4183]: I0813 19:53:40.987283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:40Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.002603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.017928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.031162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.048549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.065241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.080681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.098948 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.112276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.129425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.146903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.166703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.187548 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.205905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.209529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.210917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211532 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.211903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.212865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.212971 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.213490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.213944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.214629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.214854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.215184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.215667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:41 crc kubenswrapper[4183]: E0813 19:53:41.216719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.233630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.253210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.270092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.291469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.308630 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.327316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.343225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.361917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.380121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.397201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.418045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.433110 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.433545 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.440924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.457607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.475497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.491204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.510182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.545711 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.583944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.623729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.661893 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.674930 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.705079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.753198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.783123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.822335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.861983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.907293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.944270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:41 crc kubenswrapper[4183]: I0813 19:53:41.982765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.023703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.064041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.105425 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.140655 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.182237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.208956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.209368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:42 crc kubenswrapper[4183]: E0813 19:53:42.210663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.226494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to 
decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.395062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.417003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.433565 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.433719 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.441649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.483633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.523251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.546872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.564833 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.581658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:42 crc kubenswrapper[4183]: I0813 19:53:42.599446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:42Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209465 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209472 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209646 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209952 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.209956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.211911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.212218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.212903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.213564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.213595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.213977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:43 crc kubenswrapper[4183]: E0813 19:53:43.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.436268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:43 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:43 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:43 crc kubenswrapper[4183]: I0813 19:53:43.436381 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:44 crc kubenswrapper[4183]: E0813 19:53:44.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.433166 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:44 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:44 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:44 crc kubenswrapper[4183]: I0813 19:53:44.433303 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.208232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210399 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210685 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.210988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.211576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.211645 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.212771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.213496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.213566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.214026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.214564 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.214952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.215751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.216119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.228506 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.244413 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.266523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.282344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.301419 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.317484 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.343094 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.366623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.390910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.412466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.428976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.432900 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.432971 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.444519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07b
d759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: E0813 19:53:45.445495 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.462054 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.483249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.536918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.553242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.567225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.583327 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.600851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.617265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.632915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.649493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.665401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.680595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.703715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.722916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.739844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.758872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.778887 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.799107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.818013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.839415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.857183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.874746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.892990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.913007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.938071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.957247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.976381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:45 crc kubenswrapper[4183]: I0813 19:53:45.994336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.010962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.026618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.041672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.057057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.071375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.087459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.110382 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.139623 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.163200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.180862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.200285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208285 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.208331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.209320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:46 crc kubenswrapper[4183]: E0813 19:53:46.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.223042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.244713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.263180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.281184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.303077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e3050a2c27f17717b863b50ca89a0ed01ab622a6dfd0fddb97c205ab6a852d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:52:38Z\\\",\\\"message\\\":\\\"2025-08-13T19:51:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615\\\\n2025-08-13T19:51:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_70623c4d-4c49-4b7a-b073-745520179615 to /host/opt/cni/bin/\\\\n2025-08-13T19:51:53Z [verbose] multus-daemon started\\\\n2025-08-13T19:51:53Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:52:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.319318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.335462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.352247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.371636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.393350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.412632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.432418 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:46 crc kubenswrapper[4183]: healthz check failed Aug 
13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.432567 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.433664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa
4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.451459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.482964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.504596 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:46 crc kubenswrapper[4183]: I0813 19:53:46.528661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208555 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.208708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209579 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.210095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210155 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.209997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.210898 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211253 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.211961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.212203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.213565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.214443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.214502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.214864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.215486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:47 crc kubenswrapper[4183]: E0813 19:53:47.215744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.432346 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:47 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:47 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:47 crc kubenswrapper[4183]: I0813 19:53:47.432469 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.208694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.208915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:48 crc kubenswrapper[4183]: E0813 19:53:48.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.432761 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:48 crc kubenswrapper[4183]: I0813 19:53:48.433012 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.209682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.209869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.210038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.209466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.210986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.211770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.211830 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212327 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.212915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213090 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.213504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.214909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.215740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.432345 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.432468 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597393 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597506 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597543 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.597563 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.619535 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.624933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625030 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625050 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625070 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.625101 4183 setters.go:574] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.639740 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645557 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645577 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645658 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.645694 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.662090 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667611 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667687 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.667756 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.680742 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684915 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684964 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684978 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.684999 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:53:49 crc kubenswrapper[4183]: I0813 19:53:49.685021 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:49Z","lastTransitionTime":"2025-08-13T19:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.698982 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:49 crc kubenswrapper[4183]: E0813 19:53:49.699034 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.209380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.209680 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.209915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.210732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.211014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.211189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.433245 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:50 crc kubenswrapper[4183]: I0813 19:53:50.433396 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:50 crc kubenswrapper[4183]: E0813 19:53:50.447537 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209267 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209722 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.209963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210111 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.210976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.211972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.212895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.213002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.214598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.214973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.215184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:51 crc kubenswrapper[4183]: E0813 19:53:51.215592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.433111 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:51 crc kubenswrapper[4183]: I0813 19:53:51.433306 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209042 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.209359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.210090 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:53:52 crc kubenswrapper[4183]: E0813 19:53:52.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.246218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.273189 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.297541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.315907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.336407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.356375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.385124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.417574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.435138 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.435236 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.442150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.460014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.475110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.498992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.521125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.544454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.561002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.580187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.606467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a285167
37ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726
a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.622969 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.638573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.663627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.682266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.698573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.713371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.731418 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.748446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.764586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.780340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.798676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.821154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd18
37eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.840497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.858177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.876713 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.897604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.919472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.938545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.958184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.974657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:52 crc kubenswrapper[4183]: I0813 19:53:52.991260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.007552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.024227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.041917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.060233 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.078692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.095654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.114681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.132080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.147188 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.164064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.181695 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.196662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208503 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.208767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209164 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209580 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.209924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210437 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210606 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212507 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.212932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.212998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.213547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.213972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.214002 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:53 crc kubenswrapper[4183]: E0813 19:53:53.214589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.219682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.234186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.249949 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.276276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.293737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.307024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.324685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.340397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.360625 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.383583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.403511 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.422232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.433879 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.434484 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.441560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.458694 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.476080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.496085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:53 crc kubenswrapper[4183]: I0813 19:53:53.514714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.208691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.208950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:54 crc kubenswrapper[4183]: E0813 19:53:54.209648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.433266 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.433358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.672919 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673057 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673077 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:53:54 crc kubenswrapper[4183]: I0813 19:53:54.673144 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208215 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.208704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208359 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208533 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.208589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.209732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.209943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210679 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.210568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.210963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211839 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.211898 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.211997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.212650 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.212989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213044 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213469 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213846 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.213901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.214982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.215055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.215153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.238121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.255378 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.272524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.289503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.305168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.324147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.341375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.360200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.375966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.393325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.435869 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.436416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.437696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: E0813 19:53:55.449085 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.462850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.494154 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.513435 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.529023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.546387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.562010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.577598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.592148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.605024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.621065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.635968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.654699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.673109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.694539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.709601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.727622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.745645 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.760696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.778891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.797894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.820044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.837558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.851065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.867307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.893494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.911947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.928402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.945108 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.964266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.981029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:55 crc kubenswrapper[4183]: I0813 19:53:55.998336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.015261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.032421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.049283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.072432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.088004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.103649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.121709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.136165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.161507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 
2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.181985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.197316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.211420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:56 crc kubenswrapper[4183]: E0813 19:53:56.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.217469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0
ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 
19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.234617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.249469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.264258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.285044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.301282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.317289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.332663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.349009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.374611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.395767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.416526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.433386 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:56 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:56 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.433965 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.434475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:56 crc kubenswrapper[4183]: I0813 19:53:56.451153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.208636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.208935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.209933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.210966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.210995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.211901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.212493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.212656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.212985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.213851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:57 crc kubenswrapper[4183]: E0813 19:53:57.214702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.431877 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:57 crc kubenswrapper[4183]: I0813 19:53:57.432015 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.209384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.210708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:53:58 crc kubenswrapper[4183]: E0813 19:53:58.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.433730 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:53:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:53:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:53:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:53:58 crc kubenswrapper[4183]: I0813 19:53:58.434297 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208553 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.208950 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.208979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209430 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.209954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210682 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.210985 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211452 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.211760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.211937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.212226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.212380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.212868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.433137 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:53:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:53:59 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:53:59 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.433251 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978168 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978240 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978261 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978284 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:53:59 crc kubenswrapper[4183]: I0813 19:53:59.978318 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:53:59Z","lastTransitionTime":"2025-08-13T19:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:53:59 crc kubenswrapper[4183]: E0813 19:53:59.997328 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:53:59Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002438 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002510 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002531 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002556 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.002589 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.017464 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022185 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022245 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.022300 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042236 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.042747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.043131 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.043354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063026 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063344 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063524 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063673 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.063949 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:00Z","lastTransitionTime":"2025-08-13T19:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.078984 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:00Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.079331 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.208388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.208641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.208921 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.209657 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.209770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.210013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.210204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.432038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:00 crc kubenswrapper[4183]: I0813 19:54:00.432153 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:00 crc kubenswrapper[4183]: E0813 19:54:00.451444 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209142 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209396 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209677 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.209898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210055 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.211903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212263 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212707 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.212928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.212963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.213037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.211570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.214019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.214036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.214922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.215107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:01 crc kubenswrapper[4183]: E0813 19:54:01.216364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.433735 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:01 crc kubenswrapper[4183]: I0813 19:54:01.434016 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208411 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.208425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.208733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.209960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:02 crc kubenswrapper[4183]: E0813 19:54:02.210142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.433762 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:02 crc kubenswrapper[4183]: I0813 19:54:02.434000 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208943 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.208943 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209438 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209702 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209763 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.209920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.209999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210517 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.210990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211208 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.211860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.211920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.212869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.212951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:03 crc kubenswrapper[4183]: E0813 19:54:03.213635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.434344 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.434988 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.775722 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.775967 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"} Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 
19:54:03.803302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.821177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.842350 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.860978 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.879569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.899942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.918966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.936621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.953349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.969879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:03 crc kubenswrapper[4183]: I0813 19:54:03.989463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.013992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.030651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.047588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.063650 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.078645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.092414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.109317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.123153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.142874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.158244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.177134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.196000 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.208736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.208906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.208982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.209313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:04 crc kubenswrapper[4183]: E0813 19:54:04.209452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.217651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.234592 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.252427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.268109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.285023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.300514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.318412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.334358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.351593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.368405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.386606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.401922 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.424902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.431854 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.431983 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.443429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.460615 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.475599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.489891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.511979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.532745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.549638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.563896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.584192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.611929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.630848 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.649600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.671898 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.689530 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.704682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.727601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.747214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.764618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.794921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.811659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the 
server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.829037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.845755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.864724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.887374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.912999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.938070 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.964206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:04 crc kubenswrapper[4183]: I0813 19:54:04.987029 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.008924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.048697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.071536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.209551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.209897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.209942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.210704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210736 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.210290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211409 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211875 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212425 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212697 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.211734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.212941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.213938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.214303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.214501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.214977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.215103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.215138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.215870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.216133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.218362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.219550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.220294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.234314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.256088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.283576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.299983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.316689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.341612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.363210 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.380325 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.397248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.413567 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.432658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.434240 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.434383 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.453270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: E0813 19:54:05.453517 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.470353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.487686 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.503990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.519461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.533579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.548224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.565100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.581076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.592064 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.607745 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.622422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.640858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.655570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.678692 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.720408 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.744215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.775118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.797041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.814911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.831856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.847576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.863379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.879451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.912946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.950900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:05 crc kubenswrapper[4183]: I0813 19:54:05.991453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.032447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.079265 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.109344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.149366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.201227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.208975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209160 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.209960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:06 crc kubenswrapper[4183]: E0813 19:54:06.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.231338 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.276045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.314026 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.358928 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.389035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.430025 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.432278 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:06 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:06 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.432376 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.480965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47
4888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.512121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.556944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.593513 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.633299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.669519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.707970 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.750714 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.792924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.830983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.871963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the 
server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.908731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.950109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:06 crc kubenswrapper[4183]: I0813 19:54:06.993932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.031844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.075245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.112490 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.149844 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:07Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208990 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.209988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210096 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210850 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211124 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.211926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.211988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212258 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.212992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213011 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.212912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.213522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.213863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.215287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.216770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.217030 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.214067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.217188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:07 crc kubenswrapper[4183]: E0813 19:54:07.218044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.433453 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:07 crc kubenswrapper[4183]: I0813 19:54:07.434455 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209153 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.209701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209950 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209921 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.209565 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.210752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.211624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:08 crc kubenswrapper[4183]: E0813 19:54:08.212228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.434946 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:08 crc kubenswrapper[4183]: I0813 19:54:08.435084 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209336 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.209629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.209930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.210876 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.211872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212380 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.212170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.213968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.213982 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214378 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214669 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.214753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.214977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215570 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.215706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.215634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.216933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:09 crc kubenswrapper[4183]: E0813 19:54:09.217918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.432705 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:09 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:09 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:09 crc kubenswrapper[4183]: I0813 19:54:09.432882 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143707 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143881 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.143938 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.144597 4183 kuberuntime_manager.go:1029] "Message
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.144897 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" gracePeriod=600
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.209607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210213 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.210645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.211541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308048 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308269 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308359 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308450 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.308566 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.326145 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332704 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.332919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.336453 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.336518 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.357702 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.363927 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364339 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364359 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364386 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.364421 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.382043 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397303 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397748 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.397973 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.398139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.398349 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.415828 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422164 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422246 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422284 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.422311 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:10Z","lastTransitionTime":"2025-08-13T19:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.433273 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.433357 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.441424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.441485 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:10 crc kubenswrapper[4183]: E0813 19:54:10.455729 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810166 4183 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" exitCode=0 Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.810292 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.847565 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe 
csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.873044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.896125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.915257 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.934393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.958094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.976658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:10 crc kubenswrapper[4183]: I0813 19:54:10.997966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.032262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.048311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.069555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.086538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.109033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.135406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.156672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.176003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.197306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209175 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209543 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209943 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.209968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210353 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210706 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210848 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.210901 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211605 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.211859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.211915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.212481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.212979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.213324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.213999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.214085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.218852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:11 crc kubenswrapper[4183]: E0813 19:54:11.219132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.223001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.241735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.259621 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.275697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.291483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.307681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.323854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.350850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.369439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.387483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.411092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.431332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.434505 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.435068 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.451426 4183 status_manager.go:877] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.466193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.483766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.501406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.518467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.536583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.551408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.570010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.590057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.608920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.624312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.641483 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.655473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.671871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.690397 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.706352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.721870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.736900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.752144 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.769432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.785206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.802281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.819192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.835712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.851680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.867529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.884369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.900742 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.918433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.932581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.950350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.971433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:11 crc kubenswrapper[4183]: I0813 19:54:11.992175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.007187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.025746 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.039994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.055988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.074367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.208519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.208536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.208650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.209738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:12 crc kubenswrapper[4183]: E0813 19:54:12.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.432897 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:12 crc kubenswrapper[4183]: I0813 19:54:12.432992 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.208660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.209862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.209891 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.210697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.210763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.211849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.211898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.212983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.213218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.213955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:13 crc kubenswrapper[4183]: E0813 19:54:13.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.433466 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:13 crc kubenswrapper[4183]: I0813 19:54:13.438703 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.209968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.208965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:14 crc kubenswrapper[4183]: E0813 19:54:14.211235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.433232 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:14 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:14 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:14 crc kubenswrapper[4183]: I0813 19:54:14.433414 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.208967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.209975 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210285 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210239 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.210869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.210925 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.211757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.211901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212201 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.212770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.212873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.213975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.214380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.215124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.215252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.432766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.433117 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:15 crc kubenswrapper[4183]: E0813 19:54:15.457098 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.948590 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.970729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:15 crc kubenswrapper[4183]: I0813 19:54:15.989282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status:
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.005768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.024998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.041209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.129289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.145290 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.161627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.177245 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.193900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.206700 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208608 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.208937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.209279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:16 crc kubenswrapper[4183]: E0813 19:54:16.209322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.225336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.242202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.261068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.279284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.296508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.315717 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.332078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.348731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.371182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.387672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.404528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.419910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.433461 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:16 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:16 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.433592 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.435762 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.459647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.483350 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.506649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.525901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.543177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.561967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.579687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.596668 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.612126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.628544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.653100 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.678081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.697242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.712910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.727298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.746461 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.764908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.782877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.801199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.820496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.838348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.862140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.894723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.933093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:16 crc kubenswrapper[4183]: I0813 19:54:16.955356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.072663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.089741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.107102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.126165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.145950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.163647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.184104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.202262 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.208713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.208958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.209141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.209283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.210627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.210984 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.211164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.211296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.220751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.220891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220946 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.220988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221039 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221281 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221602 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.221652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.221731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222504 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.222505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.222715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.222975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.224559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.224626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.223383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.223641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.227690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.227979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.228641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.228914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:17 crc kubenswrapper[4183]: E0813 19:54:17.229343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.239184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z" 
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.255593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.273195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.289897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.308697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.327454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.343876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.362854 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:17Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.433025 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:17 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:17 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:17 crc kubenswrapper[4183]: I0813 19:54:17.433160 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router"
probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.208924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.208986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:18 crc kubenswrapper[4183]: E0813 19:54:18.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.432766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:18 crc kubenswrapper[4183]: I0813 19:54:18.432929 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.209986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.210952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.212066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.212150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.212185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213550 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.213973 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.213983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214186 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.214874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.214944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.215041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.216048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:19 crc kubenswrapper[4183]: E0813 19:54:19.216153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.435375 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:19 crc kubenswrapper[4183]: I0813 19:54:19.435480 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.208695 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.208972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.209156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.209296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.433025 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.433180 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.459145 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705929 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705968 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.705985 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.706007 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.706032 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.724535 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.729937 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730024 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730046 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730069 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.730097 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.751424 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.756916 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757003 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757024 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757050 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.757089 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.773216 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.780641 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.780890 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.781013 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.781142 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.781255 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.801999 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809563 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809578 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:20 crc kubenswrapper[4183]: I0813 19:54:20.809629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:20Z","lastTransitionTime":"2025-08-13T19:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.824236 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:20Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:20 crc kubenswrapper[4183]: E0813 19:54:20.824658 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.209729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.210056 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.210335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.213766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.213947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.214196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.214445 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.215733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.216735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.216857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.217465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.217584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.218245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.218541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.218999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.219714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.219994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.220277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220391 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.220651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.220938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.222074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.222906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223139 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.223203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.223737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.224418 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.225139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:54:21 crc kubenswrapper[4183]: E0813 19:54:21.224490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.432309 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:21 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:21 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:21 crc kubenswrapper[4183]: I0813 19:54:21.432416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.208763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.209433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.209688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.209962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.210405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:22 crc kubenswrapper[4183]: E0813 19:54:22.210693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.433421 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:22 crc kubenswrapper[4183]: I0813 19:54:22.433528 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210259 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210507 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.210762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.210919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211238 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211625 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.211728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.211929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212574 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.212906 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.212964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:23 crc kubenswrapper[4183]: E0813 19:54:23.214434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.433753 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:23 crc kubenswrapper[4183]: I0813 19:54:23.433921 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208368 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.208690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.208870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:24 crc kubenswrapper[4183]: E0813 19:54:24.209340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.432268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:24 crc kubenswrapper[4183]: I0813 19:54:24.432355 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208232 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.208607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.208871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.209019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.209032 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.209875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.210941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.210958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.211162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.211767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.212344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.212945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.213254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.230134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.248533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.264479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.285660 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.303173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.326573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.366940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.401501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.422077 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.432471 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.432600 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.440014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.456889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: E0813 19:54:25.461004 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.472347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.493680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.511343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.527449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.540308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.556926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.570211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.584470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.598524 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.619271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.634931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.655973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.673994 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.690758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.708883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.725130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.743404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.760254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.775733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.790392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.813140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.830011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.846862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.861042 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.875979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.893098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.908426 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.929269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.944564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.959600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.978018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:25 crc kubenswrapper[4183]: I0813 19:54:25.996040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.013049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.027542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.041978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.069309 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.095303 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.109062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.128023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.145220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.160576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.177680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.194514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.209371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.209651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.209897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210197 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:26 crc kubenswrapper[4183]: E0813 19:54:26.211137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.213477 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.230062 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.246238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.261600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.279484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.299522 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.318516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.335201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.352749 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.368535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.386111 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.402332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.417747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.433402 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:26 crc kubenswrapper[4183]: I0813 19:54:26.433887 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 
19:54:27.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.208703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208929 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209207 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.209305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.209341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.210271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.216354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.216635 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.216915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.217406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.217996 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.218288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.218568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.218929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.219000 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.219169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.219466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220465 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.220649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.220907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221578 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.221859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.221743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.222894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.222951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.223408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.223974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:27 crc kubenswrapper[4183]: E0813 19:54:27.224975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.432437 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:27 crc kubenswrapper[4183]: I0813 19:54:27.432510 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209342 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.209514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.209842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.209983 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.210584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.211107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:28 crc kubenswrapper[4183]: E0813 19:54:28.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.432638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:28 crc kubenswrapper[4183]: I0813 19:54:28.432855 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209161 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210103 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210364 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210697 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.210904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211111 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.209192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.211938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212284 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212495 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.212524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.212599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.213986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215558 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.215726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.216754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:29 crc kubenswrapper[4183]: E0813 19:54:29.217009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.433510 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:29 crc kubenswrapper[4183]: I0813 19:54:29.433741 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210482 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.210656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.210714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.211387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.212363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.213176 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.213895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.214069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.433722 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:30 crc kubenswrapper[4183]: I0813 19:54:30.433901 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:30 crc kubenswrapper[4183]: E0813 19:54:30.462856 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033020 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033108 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033128 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.033152 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.050258 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056305 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056339 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056355 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056373 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.056401 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.071383 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.076711 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077010 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077213 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077384 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.077577 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.093739 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099145 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099212 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099250 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.099270 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.113961 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119626 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119677 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119692 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.119732 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:31Z","lastTransitionTime":"2025-08-13T19:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.133861 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.134301 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.210725 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.210961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.211645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.211998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.212504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.213480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.213989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.214579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:31 crc kubenswrapper[4183]: E0813 19:54:31.216046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.433716 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:31 crc kubenswrapper[4183]: I0813 19:54:31.434761 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:32 crc kubenswrapper[4183]: E0813 19:54:32.210251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.433040 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:32 crc kubenswrapper[4183]: I0813 19:54:32.433194 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209494 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.208935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.209717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210021 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210356 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210565 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210759 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.210981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211140 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.211873 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.212544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.212969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.213640 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:33 crc kubenswrapper[4183]: E0813 19:54:33.214503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.433034 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:33 crc kubenswrapper[4183]: I0813 19:54:33.433302 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.210751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.211232 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137"
Aug 13 19:54:34 crc kubenswrapper[4183]: E0813 19:54:34.211682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.432038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:34 crc kubenswrapper[4183]: I0813 19:54:34.432175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.209902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.210755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.212960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.213565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.231367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb02
3d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.259403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.279430 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.306653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.345733 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.376635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.399440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.414891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.429115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.431071 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.431150 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.445895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.461369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: E0813 19:54:35.464272 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.479367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.497942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.515475 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.532403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.547528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.566078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.587104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.604306 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.619053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.634533 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.651601 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.669017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.687095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.707145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.728680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.745283 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.763651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.779627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.795344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.818673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.831405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.848366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.863890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.880017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.897599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.910502 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.933138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.946708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.965636 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.983686 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:35 crc kubenswrapper[4183]: I0813 19:54:35.999723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.018199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.032680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.049943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.070090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.085296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.103914 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.119738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.136328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.150407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.167647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.183476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.207498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208154 4183 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.208933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.209104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:36 crc kubenswrapper[4183]: E0813 19:54:36.209366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.228186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.246442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.261852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.277960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.303487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.327849 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.352335 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.369874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.394766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.432518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.433012 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.433157 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.454684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.472545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:36 crc kubenswrapper[4183]: I0813 19:54:36.494317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209211 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209277 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.210360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.209244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.210998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.211686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.212052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.212969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.213563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.213677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.213763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.214637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.214672 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.215893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216849 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:37 crc kubenswrapper[4183]: E0813 19:54:37.216931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.432700 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:37 crc kubenswrapper[4183]: I0813 19:54:37.432893 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.210301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:38 crc kubenswrapper[4183]: E0813 19:54:38.210646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.434304 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:38 crc kubenswrapper[4183]: I0813 19:54:38.434764 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.209996 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.210945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211016 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211444 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.211675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212534 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.212919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.212770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213515 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.213758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.214035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.214120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.214203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.215288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.215478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.213960 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.215982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:39 crc kubenswrapper[4183]: E0813 19:54:39.216706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.432882 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:39 crc kubenswrapper[4183]: I0813 19:54:39.433301 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208688 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.208722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.208964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209197 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.209887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.432324 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:40 crc kubenswrapper[4183]: I0813 19:54:40.432462 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:40 crc kubenswrapper[4183]: E0813 19:54:40.465764 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209101 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.209498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.209767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.209969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.210636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.210686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.211690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.212430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.213026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.213488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.213689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.214510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.214710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.215322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.215496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.215952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.216518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.216976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.217574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.217639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.217841 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.218133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.218168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.218910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.219630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.220704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.435116 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:41 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:41 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.435243 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502069 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502140 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502160 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:54:41 crc
kubenswrapper[4183]: I0813 19:54:41.502189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.502219 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.522002 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526909 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526933 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526959 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.526986 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.542164 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547568 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547887 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547938 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.547972 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.548000 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.608295 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614594 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614715 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614756 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.614865 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.629391 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636401 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636502 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636529 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636556 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:41 crc kubenswrapper[4183]: I0813 19:54:41.636584 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:41Z","lastTransitionTime":"2025-08-13T19:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.654760 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:41Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:41 crc kubenswrapper[4183]: E0813 19:54:41.654994 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.209652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.209957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:42 crc kubenswrapper[4183]: E0813 19:54:42.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.433538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:42 crc kubenswrapper[4183]: I0813 19:54:42.433638 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.209316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209524 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.209678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210139 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210279 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210542 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210849 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.210975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211591 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.211914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212315 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.212354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.212731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213487 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213596 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.213922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.214214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.214642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215339 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:43 crc kubenswrapper[4183]: E0813 19:54:43.215652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.431848 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:43 crc kubenswrapper[4183]: I0813 19:54:43.431962 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.208461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.209634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210486 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:44 crc kubenswrapper[4183]: E0813 19:54:44.210570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.435268 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:44 crc kubenswrapper[4183]: I0813 19:54:44.435394 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.211512 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.211695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.212108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.212498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.212668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.213998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214597 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.214760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.214970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215122 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215766 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.215922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.216687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.216176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.217408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.217739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.218388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.219936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.219997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220028 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.220574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.220688 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.221131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.221213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.221292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.221897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.222016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.223938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224556 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.224925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.224989 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.225149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.225873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.226938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.227179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.227746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.240007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.259224 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.275105 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.290325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.307423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.323207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.342074 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.370459 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.387924 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.402115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.421449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.431642 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.432233 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.438197 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.455730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: E0813 19:54:45.468173 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.474267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.490929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.507429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.521314 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.538859 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.555219 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.569369 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.589336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.612362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.632632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.648393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.663047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.678079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.694226 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.709116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.723651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.738885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.754187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.769606 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.786263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.799613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.815253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.830166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.846436 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.859958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.877492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.898599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.914895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.931482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.950313 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.966261 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:45 crc kubenswrapper[4183]: I0813 19:54:45.996191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.023186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.049970 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.072034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.094562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.119466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.138111 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.153682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.168901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.187256 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.205375 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.208674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.208734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.208974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:46 crc kubenswrapper[4183]: E0813 19:54:46.209978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.211161 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.230252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.258103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.278343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.299250 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.321347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.352887 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.375028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.393393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.412304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.432079 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.432192 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.434107 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.454652 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.470021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.979183 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.983368 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"} Aug 13 19:54:46 crc kubenswrapper[4183]: I0813 19:54:46.984354 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.004075 
4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.022911 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.041708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.059232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.076089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.094130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.114271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.139764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.159564 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.177680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.198244 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208409 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208702 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208838 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208922 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.208955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209082 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.209901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.209965 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210547 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.210842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.210955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211004 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211577 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.211759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.211933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.212904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:47 crc kubenswrapper[4183]: E0813 19:54:47.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.385702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.407160 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.426706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.431948 4183 
patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.432059 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.443756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4
b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.460142 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.476657 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.494628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.517651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.539526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.556764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.576728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.601147 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.618215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.634440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.650510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.666098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.682414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.699579 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.713272 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.731917 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.746511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.764588 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.785187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.801677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.820296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.840186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.857902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.878992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.901394 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.918627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.939003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.963913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.981263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:47Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.990280 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.991066 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/4.log" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996318 4183 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" exitCode=1 Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996483 
4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"} Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.996545 4183 scope.go:117] "RemoveContainer" containerID="419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137" Aug 13 19:54:47 crc kubenswrapper[4183]: I0813 19:54:47.999114 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.004133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.007433 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.050322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"re
ason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.067002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.082673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.102605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.124526 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.148231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.170532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.187756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.204730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.208942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.208964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:48 crc kubenswrapper[4183]: E0813 19:54:48.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.224150 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.240943 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.269389 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.284709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.300043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.316237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.369308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.386124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.424071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.434069 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.434370 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.463229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.506307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.541843 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.583871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.625697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.667690 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.702519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.746164 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.783633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.823565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.863562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.903219 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.944529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:48 crc kubenswrapper[4183]: I0813 19:54:48.985494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:48Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.003689 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.010720 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.011307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.028365 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.064174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.104760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.141538 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.183514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208674 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.208762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.208973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209408 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209678 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209469 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209839 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210481 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.210939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211332 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.212208 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.212310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.212911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.214523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.231071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.267169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.307087 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.344234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.387432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.423284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.432149 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.432526 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.465195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.511085 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.544317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.597975 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.624935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.668350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.701641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.744235 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.791191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.830885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419806224cd6c0a59f1840c4646176b965fcb9ec1bd31aa759d37bc257e52137\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:24Z\\\",\\\"message\\\":\\\"094 reflector.go:295] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0813 19:53:23.937894 18094 handler.go:217] Removed *v1.Node event handler 10\\\\nI0813 19:53:23.937902 18094 handler.go:217] Removed *v1.Node event handler 2\\\\nI0813 19:53:23.937909 18094 handler.go:217] Removed *v1.EgressIP event handler 8\\\\nI0813 19:53:23.937915 18094 handler.go:217] Removed *v1.Pod event handler 3\\\\nI0813 19:53:23.937950 18094 reflector.go:295] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937977 18094 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:53:23.938001 18094 handler.go:203] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0813 19:53:23.938010 18094 handler.go:217] Removed *v1.EgressFirewall event handler 9\\\\nI0813 19:53:23.938033 18094 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:53:23.938059 18094 reflector.go:295] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:53:23.937476 18094 handler.go:217] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.866166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.903388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.913241 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.913510 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.913891 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.913906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914296 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.913918116 +0000 UTC m=+778.606583354 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914507 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.914483242 +0000 UTC m=+778.607148110 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.914705 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.915104 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.915323 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.915224 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.915430 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.914881 4183 configmap.go:199] Couldn't 
get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.916339 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.915604414 +0000 UTC m=+778.608976492 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.916496 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.916479339 +0000 UTC m=+778.609144057 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.917014 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.916995534 +0000 UTC m=+778.609660332 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.918182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918297 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.918635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918753 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.918839 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.918764944 +0000 UTC m=+778.611429532 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919188 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919214 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919260 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919290 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919319 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919345 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919371 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919494 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919520 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919583 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:49 crc 
kubenswrapper[4183]: I0813 19:54:49.919629 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919675 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919722 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.919748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920231 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920279 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920267767 +0000 UTC m=+778.612932585 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920301 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920292518 +0000 UTC m=+778.612957166 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920340 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920372 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.92036342 +0000 UTC m=+778.613028118 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920421 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920457 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920445242 +0000 UTC m=+778.613109930 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920498 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920531 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920520524 +0000 UTC m=+778.613185212 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920568 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920590326 +0000 UTC m=+778.613255134 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920648 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920681 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920671489 +0000 UTC m=+778.613336297 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920856 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920885 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920900 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object 
"openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.920952 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.920938726 +0000 UTC m=+778.613603424 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921011 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921047 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921036399 +0000 UTC m=+778.613701317 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921090 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921125 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921116101 +0000 UTC m=+778.613781009 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921173 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921208 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921198484 +0000 UTC m=+778.613863192 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921254 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921283 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921274646 +0000 UTC m=+778.613939364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921325 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:51.921350078 +0000 UTC m=+778.614014756 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921408 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921443 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.9214332 +0000 UTC m=+778.614097868 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921489 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921519 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921510553 +0000 UTC m=+778.614175201 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921576 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921591 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921614996 +0000 UTC m=+778.614279684 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921668 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921699 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921689558 +0000 UTC m=+778.614354236 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921740 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921896 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921878233 +0000 UTC m=+778.614543341 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921959 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: E0813 19:54:49.921997 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:51.921987116 +0000 UTC m=+778.614651794 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.943583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:49 crc kubenswrapper[4183]: I0813 19:54:49.983439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:49Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015143 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015612 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/2.log" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015720 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" exitCode=1 Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015750 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"} Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.015873 4183 scope.go:117] "RemoveContainer" containerID="8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.016392 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.017160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.022169 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.022569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023091 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023118 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023144 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023342 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.023367 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023753 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023926 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.023907234 +0000 UTC m=+778.716571882 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.023994 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024023 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024015577 +0000 UTC m=+778.716680215 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024065 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024090 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024083329 +0000 UTC m=+778.716747977 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024127 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024152 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024144361 +0000 UTC m=+778.716809119 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024182 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024200093 +0000 UTC m=+778.716864871 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024247 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024274 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024266485 +0000 UTC m=+778.716931253 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024312 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024341 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024332367 +0000 UTC m=+778.716997005 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024374 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024398 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024391398 +0000 UTC m=+778.717056036 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024434 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.024480 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.024472751 +0000 UTC m=+778.717137389 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.112238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.125847 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.125927 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126026 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126101 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126134 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126181 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126215 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126247 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126280 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126304 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126223 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126349 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126327 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126303866 +0000 UTC m=+778.818968554 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126410 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126398399 +0000 UTC m=+778.819063047 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126428 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126420499 +0000 UTC m=+778.819085158 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126449 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12643953 +0000 UTC m=+778.819104238 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126540 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126636 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126736 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126749 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.126848 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126771 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126886 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126911 4183 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126939 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126955 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126975 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126940 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126919774 +0000 UTC m=+778.819584572 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126990 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127007 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127009 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.126995766 +0000 UTC m=+778.819660434 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127029 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127041 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127048 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127059 4183 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.126852 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127096 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127066728 +0000 UTC m=+778.819731406 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127109 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127134 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12712121 +0000 UTC m=+778.819785938 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127138 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12714568 +0000 UTC m=+778.819810368 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127189 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127167371 +0000 UTC m=+778.819832019 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127198732 +0000 UTC m=+778.819863380 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127214 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127224 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127216532 +0000 UTC m=+778.819881200 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127233 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127244 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127310 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127296184 +0000 UTC m=+778.819960883 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.127432 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127458 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127481 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127493 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127553 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127581 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127600 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127612 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127602 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127589303 +0000 UTC m=+778.820253991 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127737 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127871 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12785508 +0000 UTC m=+778.820519768 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.127921 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.127906822 +0000 UTC m=+778.820571530 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128135 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128231 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128267 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128268 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128326 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128312973 +0000 UTC m=+778.820977781 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128361 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128380 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128401 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128390626 +0000 UTC m=+778.821055364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128404 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128453 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128440887 +0000 UTC m=+778.821105575 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128473 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128495 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128509599 +0000 UTC m=+778.821174327 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128439 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128552 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128578 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128567541 +0000 UTC m=+778.821232239 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128578 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128600 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128611 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128649 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.128637163 +0000 UTC m=+778.821301871 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128660 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128685 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128702 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128691084 +0000 UTC m=+778.821355782 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128709 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128728 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128732 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128740 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128771 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128852 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128611 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128856 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128836468 +0000 UTC m=+778.821502216 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128907 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.128917 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128878 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128922 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12890901 +0000 UTC m=+778.821573748 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128943 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128964 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128950722 +0000 UTC m=+778.821615360 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128984 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.128973962 +0000 UTC m=+778.821638640 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.128986 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129030 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129021774 +0000 UTC m=+778.821686462 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129048 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129060 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129105 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129149 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129190 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129173258 +0000 UTC m=+778.821837946 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129166 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129211 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129226 4183 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129218 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129207419 +0000 UTC m=+778.821872057 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129234 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129152 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129267 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.12925417 +0000 UTC m=+778.821918868 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129307 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129294471 +0000 UTC m=+778.821959169 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129337 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129417 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129400375 +0000 UTC m=+778.822065193 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129486 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129533 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.129521058 +0000 UTC m=+778.822185736 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129582 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod 
\"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129734 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129861 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129874 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129927 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.129908259 +0000 UTC m=+778.822573037 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.129961 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.129977 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.130024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.130039 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.130080 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config 
podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.130068084 +0000 UTC m=+778.822732772 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132218 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132426 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132528 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132552 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132570 4183 secret.go:194] Couldn't get secret 
openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132595 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132576685 +0000 UTC m=+778.825241403 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132658 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132645697 +0000 UTC m=+778.825310385 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132680 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132670098 +0000 UTC m=+778.825334786 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132703 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132711 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132690988 +0000 UTC m=+778.825355656 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132715 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132732 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.132721179 +0000 UTC m=+778.825385837 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132538 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132765 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.13274916 +0000 UTC m=+778.825413858 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132893 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132905 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132922 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132932 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.132953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.132982 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133023498 +0000 UTC m=+778.825688346 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133199 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133186533 +0000 UTC m=+778.825851161 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133273 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133231 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133217613 +0000 UTC m=+778.825882211 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133331537 +0000 UTC m=+778.825996135 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133506 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.133544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.133532552 +0000 UTC m=+778.826197360 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134406 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134493 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134541 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134706 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134926 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.134975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135019 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135329 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.135625 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.135688 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.135673293 +0000 UTC m=+778.828338082 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.135757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136185 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136355 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136355 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136473 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136551 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136564 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136574 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136577 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136841 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.136919 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137012 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137190 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137288 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137325 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137346 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137362 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137490 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137505 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137514 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137574 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.137551657 +0000 UTC m=+778.830216475 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.137617 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.137603969 +0000 UTC m=+778.830268597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.137980 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.138095 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.138165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138259 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138236237 +0000 UTC m=+778.830900985 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138281538 +0000 UTC m=+778.830946236 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138480444 +0000 UTC m=+778.831145132 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138518 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138507944 +0000 UTC m=+778.831172632 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138546 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138529035 +0000 UTC m=+778.831193863 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138568 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138556696 +0000 UTC m=+778.831221394 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138590 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138580156 +0000 UTC m=+778.831244844 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138614 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138599577 +0000 UTC m=+778.831264225 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138635 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138624268 +0000 UTC m=+778.831288946 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.138652 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138643238 +0000 UTC m=+778.831307906 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139058 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139079 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139089 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139163 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139222 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.138758162 +0000 UTC m=+778.831422820 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139243 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139276 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139258376 +0000 UTC m=+778.831923084 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139303 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139290797 +0000 UTC m=+778.831955465 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139333 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139314887 +0000 UTC m=+778.831979565 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139459 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139483 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139497 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.139662 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.139648807 +0000 UTC m=+778.832313505 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.153347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.168408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.186597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209165 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.209627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.210110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.234358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239388 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239413 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239425 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.239491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.239474365 +0000 UTC m=+778.932138983 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239238 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239860 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239921 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:50 
crc kubenswrapper[4183]: E0813 19:54:50.240043 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240054 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240077 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240068332 +0000 UTC m=+778.932732950 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240104 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240091983 +0000 UTC m=+778.932756601 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240154 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240179 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240193 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240238 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240224957 +0000 UTC m=+778.932889615 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.239976 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240274 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240291 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240313619 +0000 UTC m=+778.932978247 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.240356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240438 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240589 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.240579357 +0000 UTC m=+778.933243985 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.240702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240900 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240922 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.240934 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241118 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241135 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241147 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241060 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241160 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241145283 +0000 UTC m=+778.933809931 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241281 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241267677 +0000 UTC m=+778.933932355 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241309 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241361 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241434 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241498 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241507 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241518 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241529 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241531 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241541 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241569 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241559965 +0000 UTC m=+778.934224583 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241603 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241614 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.241633 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241636 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241628207 +0000 UTC m=+778.934292825 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241657 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241645127 +0000 UTC m=+778.934309775 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241680 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241690 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241698 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241717949 +0000 UTC m=+778.934382577 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.241873 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.241859883 +0000 UTC m=+778.934524631 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242030 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242106 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242134141 +0000 UTC m=+778.934798759 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242175 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242151 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242225 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242217404 +0000 UTC m=+778.934882022 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242245 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242271 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242264985 +0000 UTC m=+778.934929603 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242453 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242546 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242557 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242517562 +0000 UTC m=+778.935182150 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242670 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242679 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242697 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242689817 +0000 UTC m=+778.935354405 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242742 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242755 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242764 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242877 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242864322 +0000 UTC m=+778.935528950 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242907 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.242934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242943 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.242933234 +0000 UTC m=+778.935597852 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242985 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243097 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243108 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243142 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.24313154 +0000 UTC m=+778.935796278 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243184 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243207 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243200002 +0000 UTC m=+778.935864620 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.242683 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243223 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243245 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243238833 +0000 UTC m=+778.935903461 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243033 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243342 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243366 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243449 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243537 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243072 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243605 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243594013 +0000 UTC m=+778.936258691 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243629 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243660 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243653255 +0000 UTC m=+778.936317873 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243702 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243730 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243737 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.243727597 +0000 UTC m=+778.936392235 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243758 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.243849 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.243989 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244151 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.244291 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244388 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244400 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244409 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244437 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-08-13 19:56:52.244429247 +0000 UTC m=+778.937093865 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244478 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244489 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244498 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244520 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244513969 +0000 UTC m=+778.937178597 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244552 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244568421 +0000 UTC m=+778.937233039 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244614 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244629 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244636 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" 
not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244661 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244653833 +0000 UTC m=+778.937318451 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244677 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244670084 +0000 UTC m=+778.937334682 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244714 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244724 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244731 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244754 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244747586 +0000 UTC m=+778.937412214 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244915 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244930 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244938 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.244966 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.244957402 +0000 UTC m=+778.937622020 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245319 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245311182 +0000 UTC m=+778.937975920 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245436 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245458 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245476 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245487 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245464 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245456736 +0000 UTC m=+778.938121364 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245544 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245532438 +0000 UTC m=+778.938197076 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.245563 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.245554229 +0000 UTC m=+778.938218907 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.266125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.302926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.341687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.345649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: 
\"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.346894 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.346993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347377 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347422 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347481 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.347464637 +0000 UTC m=+779.040129395 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347661 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347704 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347717 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.347747 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.347737474 +0000 UTC m=+779.040402172 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348006 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348048 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348059 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.348116 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:56:52.348105455 +0000 UTC m=+779.040770163 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.386081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using 
insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.424660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.433886 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.434003 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.461591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: E0813 19:54:50.469877 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.504766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.544252 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.582352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.621119 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.664313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.705319 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.745699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.784401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.824759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.867726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.905702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.943268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:50 crc kubenswrapper[4183]: I0813 19:54:50.984275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:50Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.023422 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.031437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.066347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.102216 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.140548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.187704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.208931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209349 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.209954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.210951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210970 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.210991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211414 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211575 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.211907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.211970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.212036 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.212367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.229056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.262028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.304446 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.346546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.383386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.425427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.434259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:51 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:51 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.434360 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.464237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.505191 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.544248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.596936 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.626083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.663548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.706377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.745141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.786217 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.822228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.862367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864260 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864275 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.864317 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.881356 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.887980 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888346 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888455 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888651 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.888855 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.905387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.909715 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915192 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915281 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915302 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915325 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.915351 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.930991 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935734 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935872 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935897 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935924 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.935952 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.945186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.951005 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf
36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265
397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\
\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956062 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956113 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956126 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956144 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.956164 4183 setters.go:574] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:54:51Z","lastTransitionTime":"2025-08-13T19:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.970348 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:51 crc kubenswrapper[4183]: E0813 19:54:51.970708 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:54:51 crc kubenswrapper[4183]: I0813 19:54:51.983754 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:51Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.022489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.064920 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.110390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.143981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.188057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209152 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209531 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.209871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:52 crc kubenswrapper[4183]: E0813 19:54:52.210335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.225447 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.265255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.397442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.420496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.433919 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:52 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:52 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.434441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.439879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.467628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.493539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.558874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.595904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.618063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.638364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.663620 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.704286 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.743501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.785609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.823055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.861640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.911888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 
2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.942851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:52 crc kubenswrapper[4183]: I0813 19:54:52.982912 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:52Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.022895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.066123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.104664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.142471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.182116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208895 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209030 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.209922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209074 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209235 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209320 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.209331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.210948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.211955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:53 crc kubenswrapper[4183]: E0813 19:54:53.212924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.225002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.260531 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.300846 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.349891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.400562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.439317 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.439448 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.474565 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.500498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.536388 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.557510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.650514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.678939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.705223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.749555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.769203 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.790200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.821865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.932385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.952008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:53 crc kubenswrapper[4183]: I0813 19:54:53.968266 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208613 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208709 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.208585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.209647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.209871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:54 crc kubenswrapper[4183]: E0813 19:54:54.210757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.433355 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:54 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:54 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.433447 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674290 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674438 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674487 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674523 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:54:54 crc kubenswrapper[4183]: I0813 19:54:54.674544 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208933 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.208934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.209952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.210847 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.210997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.211544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.213260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.213704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.213940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.214184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.214533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.229499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.244422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.260315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.275985 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.293209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.312525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.333056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.348308 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.364608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.381594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.398950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.417978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.432617 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.432738 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.445479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.461423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: E0813 19:54:55.471930 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.477542 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.496230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.526141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.541361 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.559281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.576610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.590130 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.606901 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.620106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.658302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eaf
b3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce
5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.672222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.690672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.733199 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.764121 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.786088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.803209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.821187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.839703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.855249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.873204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.891683 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.912246 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.929234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.948525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:55 crc kubenswrapper[4183]: I0813 19:54:55.975739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.003162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.019617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.037213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.054994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.070301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.095497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.114927 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.129125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.144330 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.168599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.193386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 
2024-12-26T00:46:02Z"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208645 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.208742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.209987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210209 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:56 crc kubenswrapper[4183]: E0813 19:54:56.210609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.210727 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.227086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.251561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.269053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.286215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.300984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\"
 for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.318296 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.335696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e3516e0a712925c3b7d64d813b047e984d53ef7ce13569fc512e097283e61eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:53:39Z\\\",\\\"message\\\":\\\"2025-08-13T19:52:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e\\\\n2025-08-13T19:52:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6d4c7a4b-992a-468c-8ecf-65018a2ecb5e to /host/opt/cni/bin/\\\\n2025-08-13T19:52:54Z [verbose] multus-daemon started\\\\n2025-08-13T19:52:54Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:53:39Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:52:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.352451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.370885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.386214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd75
9cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.421858 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.432623 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:56 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:56 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.432714 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.464577 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.504704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.546354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.583345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:56 crc kubenswrapper[4183]: I0813 19:54:56.622537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:54:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209333 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209567 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.209976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210056 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.209880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210230 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210331 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.210895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.210924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.211726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.211873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212270 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.212927 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.213063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213159 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.213991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.214943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:54:57 crc kubenswrapper[4183]: E0813 19:54:57.215024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.433142 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:57 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:57 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:57 crc kubenswrapper[4183]: I0813 19:54:57.433391 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.208942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.210286 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.210651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.210428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:54:58 crc kubenswrapper[4183]: E0813 19:54:58.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.433957 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:54:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:54:58 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:54:58 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:54:58 crc kubenswrapper[4183]: I0813 19:54:58.434101 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209398 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.209561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210951 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.210979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211612 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211908 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.211969 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.211982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212180 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.212943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.213674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.213972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.214396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.214921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215147 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.215909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.215955 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.216890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.217036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:54:59 crc kubenswrapper[4183]: E0813 19:54:59.217144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.432859 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:54:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:54:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:54:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:54:59 crc kubenswrapper[4183]: I0813 19:54:59.432994 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.209734 4183 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.210873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.211643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.211921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.212611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.212679 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.432359 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 
19:55:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:00 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:00 crc kubenswrapper[4183]: I0813 19:55:00.432441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:00 crc kubenswrapper[4183]: E0813 19:55:00.473569 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.209436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209654 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.209914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210970 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.210991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211705 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.211739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212099 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212275 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.212691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.212975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.213599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.213934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214333 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214592 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.214664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.214904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:01 crc kubenswrapper[4183]: E0813 19:55:01.215425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.432877 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:01 crc kubenswrapper[4183]: I0813 19:55:01.432969 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059343 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059448 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059466 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059721 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.059752 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.075262 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080759 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080880 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080898 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080919 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.080940 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106482 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106586 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106611 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106648 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.106692 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131575 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131598 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.131666 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.149335 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156171 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156266 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156291 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156315 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.156354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:02Z","lastTransitionTime":"2025-08-13T19:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.184532 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:02Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.184600 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.208552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.208326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.208886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:02 crc kubenswrapper[4183]: E0813 19:55:02.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.432679 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:02 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:02 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:02 crc kubenswrapper[4183]: I0813 19:55:02.432963 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.208920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.208426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.209610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.211393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.211760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.212316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.212942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.213749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.213903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.214518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.214966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.215955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.216238 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:03 crc kubenswrapper[4183]: E0813 19:55:03.216722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.232063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.247610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.265721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a
9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.283497 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.302981 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.323294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.350613 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.366223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.382297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.397395 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.417904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.432142 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.432243 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.434398 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.454439 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.471231 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.488719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.555332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.572157 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.588757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.603857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.621247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.637302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.653326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.672145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.688405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.705952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.726645 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.742215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.760405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.780920 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.799180 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.814006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.830004 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.846385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.864209 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.901163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.919360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.935888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.952267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.974215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:03 crc kubenswrapper[4183]: I0813 19:55:03.989706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:03Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.005349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.021434 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.036710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.055051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.071295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.100228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.116890 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.131052 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.145063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.163065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.178068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.192517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.208879 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.209528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.209966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.210082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:04 crc kubenswrapper[4183]: E0813 19:55:04.210164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.211280 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.227991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.242520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.256518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.274705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.294055 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.311960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.328612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.346929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.364604 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.380202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.397644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.413517 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.429648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.433177 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.433363 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:04 crc kubenswrapper[4183]: I0813 19:55:04.446724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209234 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.209944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.211884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.212810 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.212923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.213956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.214007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.215928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.216265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.231175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.250362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.265662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.293002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.319971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.355288 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.373403 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.393258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.418137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.434027 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.434105 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.436396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.451638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: E0813 19:55:05.475868 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.480125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.498926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.511933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.528649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.548498 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.565532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.582098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.604239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.624152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.646558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.664405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.687586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.708194 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.730101 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.747442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.766012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.782908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.800496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.823873 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.847081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.862973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.880015 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.899089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.917710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.935229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.953451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.970115 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:05 crc kubenswrapper[4183]: I0813 19:55:05.988675 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.004078 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.029213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.044491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.058377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.075702 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.095342 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.111663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.125131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.141162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.157732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.171548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.185550 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.200047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209545 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.209923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.209978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:06 crc kubenswrapper[4183]: E0813 19:55:06.210412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.218723 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event 
from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.236574 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.255971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.272664 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.289373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.304926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.331647 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.351437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.368268 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.389485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.428467 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.432070 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.432476 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.467640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.508950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.547345 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:06 crc kubenswrapper[4183]: I0813 19:55:06.596599 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.208982 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.209609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210860 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.209491 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210956 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.211947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.210885 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.213893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215853 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.215889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.215932 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.214184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.216598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.216591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.216605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.217582 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.217911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:07 crc kubenswrapper[4183]: E0813 19:55:07.218216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.432260 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:07 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:07 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:07 crc kubenswrapper[4183]: I0813 19:55:07.432345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209101 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.209612 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:08 crc kubenswrapper[4183]: E0813 19:55:08.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.432578 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:08 crc kubenswrapper[4183]: I0813 19:55:08.432676 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.209718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210010 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.210734 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.211199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.211419 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.211725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.212157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.212455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.212585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.213010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.213195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.213667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.214177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.214343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.214713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.215707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.215745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216341 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.216547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.216627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.215534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217028 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.217629 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.217753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.218469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.218925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.218587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.220244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.219650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.219957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221127 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.221322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.220928 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221772 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.221976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.222533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:09 crc kubenswrapper[4183]: E0813 19:55:09.223622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.433538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:09 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:09 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:09 crc kubenswrapper[4183]: I0813 19:55:09.433685 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.211662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211854 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.211987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.212061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.212413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.432199 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:10 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:10 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:10 crc kubenswrapper[4183]: I0813 19:55:10.432323 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:10 crc kubenswrapper[4183]: E0813 19:55:10.477003 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.100349 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.208991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.209920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.210643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211675 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.211891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.211953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.212620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.212896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.213028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.213921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.214965 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:11 crc kubenswrapper[4183]: E0813 19:55:11.215468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.432630 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:11 crc 
kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:11 crc kubenswrapper[4183]: I0813 19:55:11.432725 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.208740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.208502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.208996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.209569 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.209948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312336 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312499 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312548 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312627 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.312682 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.336305 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342032 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342117 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342139 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342165 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.342196 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.362189 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.362509 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.362747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.363118 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.363453 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.388729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389275 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389477 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.389883 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.390274 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.405680 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413360 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413707 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.413911 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.414047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.414164 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:12Z","lastTransitionTime":"2025-08-13T19:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.429545 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:12Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:12 crc kubenswrapper[4183]: E0813 19:55:12.431016 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.431599 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:12 crc kubenswrapper[4183]: I0813 19:55:12.431959 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208390 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208508 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208683 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.208456 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.209962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210513 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.210562 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.210877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.211991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.212178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.212691 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.212974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.213267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.213433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.213964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:13 crc kubenswrapper[4183]: E0813 19:55:13.214697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.433551 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:13 crc kubenswrapper[4183]: I0813 19:55:13.434499 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208428 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.208664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.208746 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:14 crc kubenswrapper[4183]: E0813 19:55:14.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.432945 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:14 crc kubenswrapper[4183]: I0813 19:55:14.433072 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208712 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.208732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.208904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.208965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209037 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209489 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210709 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210925 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.210119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.211578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.212878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.212901 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.213731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.213770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.214064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.271578 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.288009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.314696 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.339255 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.362935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.379905 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.396993 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.410210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.425332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.431379 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.431510 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.441597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.456529 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.473162 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: E0813 19:55:15.478088 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.491167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.518463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.536989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.550933 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.566937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.585086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.598515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.618050 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.638710 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.657374 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.675866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.694321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.715765 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.734584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.756474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.776639 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.800921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.823967 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.843718 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.864204 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.884223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.905011 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.934225 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc 
kubenswrapper[4183]: I0813 19:55:15.954983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.971916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:15 crc kubenswrapper[4183]: I0813 19:55:15.988501 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.003422 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.023573 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.043104 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.059921 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.079408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.097012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.119124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.144412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.161658 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.177352 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.194116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208765 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.209866 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.209993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:16 crc kubenswrapper[4183]: E0813 19:55:16.210910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.214635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.238417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.256768 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.279037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.299457 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.315298 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.332228 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.350105 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.367126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.384980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.401862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.416449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.432643 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.432737 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.434215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.452651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.471594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.492431 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.511474 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:16 crc kubenswrapper[4183]: I0813 19:55:16.532210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.208995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209081 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.209752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209919 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.209985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.209988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210065 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210763 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.210968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.211041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.213629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.214345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214417 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.214639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.215655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216500 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.216645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.216911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.217886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.218938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.220934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.221629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:17 crc kubenswrapper[4183]: E0813 19:55:17.222623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.434637 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:17 crc kubenswrapper[4183]: I0813 19:55:17.434881 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208610 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.208717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.208492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.208958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:18 crc kubenswrapper[4183]: E0813 19:55:18.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.432644 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:18 crc kubenswrapper[4183]: I0813 19:55:18.432873 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209351 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.209583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210043 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210217 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210393 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210615 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210916 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.210942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.210992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211543 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.211748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212694 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.212851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213088 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.213969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.214023 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.213979 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.214384 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.214925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215377 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.215608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.215950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.216116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:19 crc kubenswrapper[4183]: E0813 19:55:19.216215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.432379 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:19 crc kubenswrapper[4183]: I0813 19:55:19.432546 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209167 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209510 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.434604 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:20 crc kubenswrapper[4183]: I0813 19:55:20.434755 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:20 crc kubenswrapper[4183]: E0813 19:55:20.480008 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.211410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.212321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.212768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.213032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.213362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.213603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.213999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214046 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214121 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215026 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.215900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.215979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216011 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.216171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.216082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.217908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.217635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.218758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.218927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219718 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.219895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.219986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.220123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.220143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.220968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.221326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.223100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:21 crc kubenswrapper[4183]: E0813 19:55:21.223369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.433308 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:21 crc kubenswrapper[4183]: I0813 19:55:21.433915 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.208591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.208922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.209128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.209227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.210949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.433042 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.433731 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711431 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711498 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711516 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711536 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.711557 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.727956 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733744 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.733942 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.734119 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.734235 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.750310 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756271 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756292 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756318 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.756354 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.775457 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781703 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.781761 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.799995 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806387 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806411 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806435 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:22 crc kubenswrapper[4183]: I0813 19:55:22.806472 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:22Z","lastTransitionTime":"2025-08-13T19:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.825055 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:22Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:22 crc kubenswrapper[4183]: E0813 19:55:22.825143 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209239 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209417 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.209905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.210938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211206 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211354 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.211730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.211981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212044 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212299 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.212840 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.212931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.213888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.214209 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.214443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.214895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215608 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.215754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:23 crc kubenswrapper[4183]: E0813 19:55:23.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.433767 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:23 crc kubenswrapper[4183]: I0813 19:55:23.433988 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208600 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.209903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:24 crc kubenswrapper[4183]: E0813 19:55:24.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.433126 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:24 crc kubenswrapper[4183]: I0813 19:55:24.433531 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.208657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.208964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.208300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.209864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.209994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.210765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211305 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211634 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.211680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.212589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.213702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.214381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.214901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.215220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.215414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.228412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.246437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.265169 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.283258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.316577 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc 
kubenswrapper[4183]: I0813 19:55:25.338447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.355035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.376554 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.393883 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.410875 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.427722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.432051 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.432186 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.447463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.464002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.478635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: E0813 19:55:25.481638 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.500612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.525681 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.545535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.561605 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.577200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.596899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.617866 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.633453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.650299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.668548 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:54:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.686041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.701069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.717958 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.734020 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.750552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.767982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.785708 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.808717 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.829407 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.847739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.868006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.889086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.909030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.925576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.945757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.964566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:25 crc kubenswrapper[4183]: I0813 19:55:25.984769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.001131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.021317 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.039002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.055656 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.073092 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.090569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.110707 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.128750 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.146329 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.165602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.180669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.196983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.208506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.208681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.208974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.209380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.209857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.210498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.210918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.211061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.211409 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:26 crc kubenswrapper[4183]: E0813 19:55:26.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.217974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.234638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.248240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.266141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.279599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.301294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.320499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.340955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.358432 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.376079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.395497 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.412336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.428665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.433750 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.433963 4183 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:26 crc kubenswrapper[4183]: I0813 19:55:26.448253 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.209938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208962 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210477 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208969 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.210987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.211093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209062 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209089 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209155 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209185 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.212871 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.213950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209268 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209277 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.214910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.208873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.209421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.215985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.216491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.217910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:27 crc kubenswrapper[4183]: E0813 19:55:27.218028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.433219 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:27 crc kubenswrapper[4183]: I0813 19:55:27.433345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.209309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.209379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.210330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:28 crc kubenswrapper[4183]: E0813 19:55:28.210641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.432879 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:55:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:55:28 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:55:28 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:55:28 crc kubenswrapper[4183]: I0813 19:55:28.432997 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.210922 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211367 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.215981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211680 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211659 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.216991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211729 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.211766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.219855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.220154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:29 crc kubenswrapper[4183]: E0813 19:55:29.218705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.433518 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:29 crc kubenswrapper[4183]: I0813 19:55:29.434314 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.172465 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.172591 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"} Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.189584 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209158 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209424 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.209606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210004 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210150 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210405 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.210631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.212703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.229473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.252125 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.273569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.300068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.316046 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.333293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.352597 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.372152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.390719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.407591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.425401 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432760 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:55:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:55:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:55:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432946 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.432999 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.434179 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" 
containerStatusID={"Type":"cri-o","ID":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.434265 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02" gracePeriod=3600 Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.443916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.462295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.482482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: E0813 19:55:30.484139 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.495955 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.509494 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.527689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.542402 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.558071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.574222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.596126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.617001 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.637687 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.658970 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.677549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.698057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.714660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.733913 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.748767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.764003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.782511 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.807091 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.828143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.846845 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.863386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.880347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.897313 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.913133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.937187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.954651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.970318 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:30 crc kubenswrapper[4183]: I0813 19:55:30.991030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.007065 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.022281 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.038561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.052525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.078614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.098370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.114293 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.133737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.153227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.170938 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.188355 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.205023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208423 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.208984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.208985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209164 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209207 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209462 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209610 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.209883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210836 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210856 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.210950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.210982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.211159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.211230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.212660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.212951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:31 crc kubenswrapper[4183]: E0813 19:55:31.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.225973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.243672 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.262906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.278759 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.300198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.323760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.344152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.363258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.390700 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.413277 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:31 crc kubenswrapper[4183]: I0813 19:55:31.433133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:31Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.208963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.208997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209053 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:32 crc kubenswrapper[4183]: I0813 19:55:32.209903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.209992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.210200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:32 crc kubenswrapper[4183]: E0813 19:55:32.210395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011591 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011920 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011944 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.011966 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.012000 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.031264 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037199 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037444 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037566 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.037889 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.060851 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.065963 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066043 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066066 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.066116 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.087550 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093403 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093486 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093500 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093520 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.093540 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.107668 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113186 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113262 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.113313 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:33Z","lastTransitionTime":"2025-08-13T19:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.128925 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:33Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.128988 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208949 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209147 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.208687 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209661 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.209931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.209938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.210664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.210996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.211751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.212939 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.212945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.213221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.213670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.214517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:33 crc kubenswrapper[4183]: I0813 19:55:33.215581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.215977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:33 crc kubenswrapper[4183]: E0813 19:55:33.216711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208864 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:34 crc kubenswrapper[4183]: I0813 19:55:34.209655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.209759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210345 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:34 crc kubenswrapper[4183]: E0813 19:55:34.210610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.208737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209494 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.211877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212505 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.213095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.208489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.213240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209719 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209869 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.214954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215111 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.210195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.210237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.229598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.247083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.271492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.287404 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.304102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.324512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.346598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.363728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.386112 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.404159 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.419047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.438455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.457017 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.473934 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: E0813 19:55:35.485910 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.503259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.536591 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.577260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.606515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.625880 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.645143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.665059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.684499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.702131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.718974 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.738712 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.755947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.775367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.788612 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.805972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.821347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.846705 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.862544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.878903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.903665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.918377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.939758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.959140 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.977195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:35 crc kubenswrapper[4183]: I0813 19:55:35.996896 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.013973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.030155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.048141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.064376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.092633 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc 
kubenswrapper[4183]: I0813 19:55:36.111030 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.135614 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.151971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.166049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.179345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.192850 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.208522 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.208753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.209929 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.210046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:36 crc kubenswrapper[4183]: E0813 19:55:36.210113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.210661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.226267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.240867 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.260632 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.288667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.302312 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.316731 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.330416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.348769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.367972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.382453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.398560 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.412950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.427473 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.441537 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.459557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:36 crc kubenswrapper[4183]: I0813 19:55:36.480479 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209239 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209333 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.209877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.209954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210029 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210231 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210294 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.210712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211442 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.211575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.211933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212098 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.212694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.212897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213171 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.213874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:37 crc kubenswrapper[4183]: I0813 19:55:37.213920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.214945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215657 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:37 crc kubenswrapper[4183]: E0813 19:55:37.215772 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.208993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209139 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209291 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.209531 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.209722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:38 crc kubenswrapper[4183]: I0813 19:55:38.212039 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:55:38 crc kubenswrapper[4183]: E0813 19:55:38.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208643 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.208900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209019 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.208978 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209425 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209646 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209770 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.209901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.209953 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210213 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210457 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210715 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210231 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.210767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.210224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211409 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.211725 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.211938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.212046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.212982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:39 crc kubenswrapper[4183]: I0813 19:55:39.213295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:39 crc kubenswrapper[4183]: E0813 19:55:39.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.208415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.208693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:40 crc kubenswrapper[4183]: I0813 19:55:40.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:40 crc kubenswrapper[4183]: E0813 19:55:40.487965 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.209372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.209948 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210759 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.210995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212603 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212709 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.212895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.212956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.213038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213205 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:41 crc kubenswrapper[4183]: I0813 19:55:41.213325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.213933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.214161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.215911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.216887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217043 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.217744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.218971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:41 crc kubenswrapper[4183]: E0813 19:55:41.219169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209095 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:42 crc kubenswrapper[4183]: I0813 19:55:42.209211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.209483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:42 crc kubenswrapper[4183]: E0813 19:55:42.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.212882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.214498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214921 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.214927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213148 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213389 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213671 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213756 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.212912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.214869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.213315 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.216288 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.216928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.216947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.217083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.217571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.217958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.218897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.219057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525399 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525463 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525481 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525502 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.525527 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.545583 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549591 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549672 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.549889 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.550257 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.563961 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.567932 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568007 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568079 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568152 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.568178 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.581710 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586657 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586755 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586897 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586931 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.586967 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.603058 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.607984 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608023 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608034 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608053 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:43 crc kubenswrapper[4183]: I0813 19:55:43.608073 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:43Z","lastTransitionTime":"2025-08-13T19:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.627194 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:43Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:43 crc kubenswrapper[4183]: E0813 19:55:43.627269 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.208693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.208766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.208710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.209015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.209866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:44 crc kubenswrapper[4183]: I0813 19:55:44.210209 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:44 crc kubenswrapper[4183]: E0813 19:55:44.210494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.209525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208621 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208969 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208981 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209151 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209180 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209201 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.209240 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.209945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.210704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.211203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.211323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.213333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.214963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.215988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.216197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.216344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.217096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.217899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.218936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.219936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.220203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.225358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.224982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.234598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.252326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.274895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.293900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.311923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.325454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.344118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.364002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.383908 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.404294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.423966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.439324 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.454437 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.475139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: E0813 19:55:45.489594 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.491009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.508589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.528109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.546926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.563106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.579428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.597344 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.613067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.627648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.645576 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.663074 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.679884 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.694595 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.713364 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.730897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.744865 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.761743 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.779617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.799458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.820684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.839895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.856965 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.881500 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.902081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.926453 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.949887 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.971894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:45 crc kubenswrapper[4183]: I0813 19:55:45.989187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.013545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.036552 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.055414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.081184 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.101311 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.119677 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.138006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.158192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.180428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.207305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.208859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208879 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:46 crc kubenswrapper[4183]: E0813 19:55:46.209909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.227698 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.249455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.274009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.294673 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.324302 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 
2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.340935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.359362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.378906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.402173 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.423390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.442699 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.465405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.495478 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.515068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:46 crc kubenswrapper[4183]: I0813 19:55:46.531978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209038 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.209742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.209911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210072 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210423 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210585 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.210744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.210913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.211734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212228 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212511 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212519 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212636 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212707 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.212952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.212973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213399 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.213986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214341 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.214542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.214954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.215299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:47 crc kubenswrapper[4183]: I0813 19:55:47.215359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.215647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.216597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.216771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:47 crc kubenswrapper[4183]: E0813 19:55:47.217018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208722 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208849 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208884 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:48 crc kubenswrapper[4183]: I0813 19:55:48.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.209984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.210183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:48 crc kubenswrapper[4183]: E0813 19:55:48.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209419 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.209942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210001 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210121 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.212077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.210963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211079 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211446 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:49 crc kubenswrapper[4183]: I0813 19:55:49.211910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212063 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212630 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.212915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:49 crc kubenswrapper[4183]: E0813 19:55:49.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.208946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.209422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.209882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.210711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:50 crc kubenswrapper[4183]: I0813 19:55:50.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:50 crc kubenswrapper[4183]: E0813 19:55:50.491999 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209190 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208938 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209047 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208978 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.209942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.209953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210170 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210375 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210523 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.208645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211174 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.211595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.211923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.212989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213022 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:51 crc kubenswrapper[4183]: I0813 19:55:51.213599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.213985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:51 crc kubenswrapper[4183]: E0813 19:55:51.214913 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.208949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.208979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:55:52 crc kubenswrapper[4183]: I0813 19:55:52.209411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:55:52 crc kubenswrapper[4183]: E0813 19:55:52.209916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208381 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.208574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.208753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.208989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.209947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210015 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.210577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.210905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211367 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.211768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.211947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.212264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.212583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.213741 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214340 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.215640 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.215920 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.216446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764365 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764412 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764428 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764459 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.764483 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.783246 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.791630 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792179 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792451 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792580 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.792731 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.811048 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.817735 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.817922 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818121 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818278 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.818402 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.849442 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.857971 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858316 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858459 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858629 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.858764 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.887074 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894340 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894603 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894719 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.894963 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:55:53 crc kubenswrapper[4183]: I0813 19:55:53.895304 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:55:53Z","lastTransitionTime":"2025-08-13T19:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.913122 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:53Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:53 crc kubenswrapper[4183]: E0813 19:55:53.913189 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.208860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.208523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209139 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.209996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.210073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.210484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:54 crc kubenswrapper[4183]: E0813 19:55:54.210768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675270 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675642 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675762 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.675990 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:55:54 crc kubenswrapper[4183]: I0813 19:55:54.676105 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208609 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209340 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209439 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.208569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209722 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209684 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.209867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.209908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210079 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210162 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210229 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210262 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210504 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210539 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.210905 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.210974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211177 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211706 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.211959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.211994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.212048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212315 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.212594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.212731 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.213583 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.213873 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.214456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.227942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.244287 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.264242 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.285094 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.301999 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.318688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.335131 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.360851 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.377755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.397198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.422138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.441441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.458117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.479031 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: E0813 19:55:55.493683 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.501282 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.522960 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.548171 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.562761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.577018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.593049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.614034 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.634136 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.651295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.668755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.685663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.703587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.718571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.736345 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.752665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.767003 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.781137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.797148 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.815248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.836444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.852126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.868337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.883350 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.902739 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.922450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.937512 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.953555 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.968726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:55 crc kubenswrapper[4183]: I0813 19:55:55.987218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.003406 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.019021 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.035238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.055083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.067732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.095583 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.113874 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.137709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.162697 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.185519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.205393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208712 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208734 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.208758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.208966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.209649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.210274 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:56 crc kubenswrapper[4183]: E0813 19:55:56.210563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.228200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.245220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.266423 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.285124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.305767 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.322163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.340535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.355586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.370258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.385888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.408444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc 
kubenswrapper[4183]: I0813 19:55:56.426946 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:56 crc kubenswrapper[4183]: I0813 19:55:56.442190 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:55:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209181 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209728 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209859 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.209863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.209980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210590 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.210863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210871 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.210937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211178 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.211953 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.211955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212576 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212844 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.212923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.212961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213449 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213515 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:57 crc kubenswrapper[4183]: I0813 19:55:57.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:55:57 crc kubenswrapper[4183]: E0813 19:55:57.214593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.209265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209658 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:55:58 crc kubenswrapper[4183]: I0813 19:55:58.209863 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.211052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:55:58 crc kubenswrapper[4183]: E0813 19:55:58.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208589 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208755 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208749 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208759 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209328 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209421 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209549 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.209939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.209995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.210707 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.210908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211757 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.211964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.211976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.212442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.212448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:55:59 crc kubenswrapper[4183]: I0813 19:55:59.213395 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.213892 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214739 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214847 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:55:59 crc kubenswrapper[4183]: E0813 19:55:59.214889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.208527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.208869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209116 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.209259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.209591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.210318 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.210575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:00 crc kubenswrapper[4183]: I0813 19:56:00.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.211335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:00 crc kubenswrapper[4183]: E0813 19:56:00.495437 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209049 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.209891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.209932 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211014 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.211592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212235 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.212413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.212972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213335 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.213604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213674 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214306 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:01 crc kubenswrapper[4183]: I0813 19:56:01.214538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.214973 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215248 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:01 crc kubenswrapper[4183]: E0813 19:56:01.215516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:02 crc kubenswrapper[4183]: I0813 19:56:02.209411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.209934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:02 crc kubenswrapper[4183]: E0813 19:56:02.210332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.208904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209004 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209066 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.209708 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.209872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210342 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.210843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.210917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211649 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.211887 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.211923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212069 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.212635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.212908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:03 crc kubenswrapper[4183]: I0813 19:56:03.213653 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:03 crc kubenswrapper[4183]: E0813 19:56:03.214389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199875 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199970 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.199992 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.200018 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.200053 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209204 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.209494 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.209734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.210765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.211488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.211717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.212487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.212649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.215692 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959
e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeByt
es\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220665 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220712 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.220843 4183 setters.go:574] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.235272 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\
\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":50153532
7},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def
4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239285 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239362 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239383 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.239446 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.253328 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258665 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258733 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.258752 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.259075 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.259163 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.276034 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282206 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282220 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282242 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:04 crc kubenswrapper[4183]: I0813 19:56:04.282262 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:04Z","lastTransitionTime":"2025-08-13T19:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.297033 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:04Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:04 crc kubenswrapper[4183]: E0813 19:56:04.297166 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208523 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.209933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208599 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208696 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208740 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.210940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208774 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.211396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208939 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209008 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.213938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209064 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209085 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.214708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209155 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.215953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209240 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.216746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.208439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.217234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.217976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.229952 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.247903 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.264720 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.282737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.300145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.320640 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.338568 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.356176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.373593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.387611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.403395 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.419729 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.442408 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.461205 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.480482 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.495992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: E0813 19:56:05.497511 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.512671 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.528541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.551161 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.565525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.582667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.597932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.616593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.632259 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.649354 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.664988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.691337 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc 
kubenswrapper[4183]: I0813 19:56:05.711227 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.730971 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.750932 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.770151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.791679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.815346 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.836715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.854622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.875412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.902980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.920983 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.943894 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.963982 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:05 crc kubenswrapper[4183]: I0813 19:56:05.986722 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.009168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.025738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.049182 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.076325 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.098116 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.115644 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.136405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.158495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.178972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.200008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208625 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.208858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.208571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.210166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:06 crc kubenswrapper[4183]: E0813 19:56:06.210633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.220044 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.240662 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.259263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.278110 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.295642 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.315056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.332747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.353472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.369451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.385669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.403059 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.419689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.438641 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.460732 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.480764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:06 crc kubenswrapper[4183]: I0813 19:56:06.500187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209483 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209716 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.209914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.209984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210002 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210200 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210491 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210541 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.210701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210815 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.210957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211029 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.211499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.211625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212190 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212680 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.212877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.212880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213014 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213457 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:07 crc kubenswrapper[4183]: I0813 19:56:07.213512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.213861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:07 crc kubenswrapper[4183]: E0813 19:56:07.214360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.208690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.208746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209504 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209013 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.209956 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.210639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:08 crc kubenswrapper[4183]: I0813 19:56:08.211077 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:56:08 crc kubenswrapper[4183]: E0813 19:56:08.212000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210366 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.210706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.210924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211169 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.211411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.211978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212602 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.212931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.212966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213005 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213080 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213236 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213398 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.213861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213898 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:09 crc kubenswrapper[4183]: I0813 19:56:09.213939 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.214439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.215061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:09 crc kubenswrapper[4183]: E0813 19:56:09.216261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.143478 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.143573 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.208542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.208895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209046 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:10 crc kubenswrapper[4183]: I0813 19:56:10.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.209998 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:10 crc kubenswrapper[4183]: E0813 19:56:10.499170 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209451 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209464 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209642 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209935 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209941 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210153 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210408 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210557 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.210918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.210976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211696 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.211898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.211992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212652 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.212961 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:11 crc kubenswrapper[4183]: I0813 19:56:11.213054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.213924 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214027 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:11 crc kubenswrapper[4183]: E0813 19:56:11.214483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.208911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209049 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209103 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:12 crc kubenswrapper[4183]: I0813 19:56:12.209134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.209705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.210882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:12 crc kubenswrapper[4183]: E0813 19:56:12.211068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209345 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209367 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209383 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.209910 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210133 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.210666 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210549 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.210876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.211556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.212946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.213965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:13 crc kubenswrapper[4183]: I0813 19:56:13.214055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:13 crc kubenswrapper[4183]: E0813 19:56:13.214636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.208653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.208449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.209757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.210386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512226 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512237 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512278 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.512299 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.526050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531306 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531393 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531414 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531435 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.531464 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.545560 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.550937 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551025 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.551074 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.564534 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.568959 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569035 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569052 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569073 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.569093 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.588623 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.594962 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595040 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595057 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595078 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:14 crc kubenswrapper[4183]: I0813 19:56:14.595100 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:14Z","lastTransitionTime":"2025-08-13T19:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.608550 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:14Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:14 crc kubenswrapper[4183]: E0813 19:56:14.608622 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.208417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209302 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209636 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.209720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209989 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210414 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.209665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210768 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.210843 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.210931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211226 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211382 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.211583 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211388 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.211309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212281 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212327 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.212593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212593 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.212918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.213997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214705 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.214949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215170 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.215373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.227187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.243581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.259957 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.277879 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.294626 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d
523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.311998 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.334503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.350299 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351259 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/3.log" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351508 4183 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" exitCode=1 Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.351608 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"} Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.352652 4183 scope.go:117] "RemoveContainer" containerID="c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.352885 4183 scope.go:117] "RemoveContainer" 
containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.353509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.357679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.378410 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.399024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd7
59cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.414248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.434201 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.449765 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.464647 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.489757 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: E0813 19:56:15.500460 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.506362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.524239 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.541269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.557247 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.572415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.588387 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.604962 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.620897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.638676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.655024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.670020 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.685454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.699234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.721493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.737484 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.760495 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.780416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.801081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.816886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.836202 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.858273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.875002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.894663 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.926236 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.945697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:15 crc kubenswrapper[4183]: I0813 19:56:15.969096 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.003642 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:15Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.020976 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.047977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.068544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.093756 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.108855 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.124349 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.149236 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.165761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.184978 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.202232 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209056 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.209769 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.211754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:16 crc kubenswrapper[4183]: E0813 19:56:16.212072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.222499 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.239376 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.269516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.286118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.304628 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.320653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.339570 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.356069 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.364234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.382370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.399208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.415684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:55:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.432113 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.445471 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.462009 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.483643 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.503598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e
067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.522071 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.539124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.558173 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.576366 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.593889 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.611320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.631165 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.665153 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.698930 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.715747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.734008 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.751758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.771091 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.788994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.806566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.836602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.854684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.878263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.904117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.919747 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.940516 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.959600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.978959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:16 crc kubenswrapper[4183]: I0813 19:56:16.993682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:16Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.008120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.033367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.047860 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.063536 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.105051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.143381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.180872 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.208575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208671 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208717 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.208942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.208945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209005 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209449 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209516 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.209895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.209934 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210234 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210302 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210434 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.210927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211011 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.211113 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.211227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.212143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.212508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.213090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.213879 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.214385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.215172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.215868 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.216106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.216240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.216711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.217499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.217851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.218528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.219003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:17 crc kubenswrapper[4183]: E0813 19:56:17.219961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.228751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc 
kubenswrapper[4183]: I0813 19:56:17.261428 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.302559 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.343270 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364224 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02" exitCode=0 Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364294 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02"} Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364331 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"} Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.364361 4183 scope.go:117] "RemoveContainer" containerID="0013e44de74322309425667dbf9912f966d38a2d7bfb94bb8f87819624687839" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.383378 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service 
ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.421610 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.429562 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.433483 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.433580 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.460251 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.504343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.549174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.580721 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.625069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.662510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.702944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.744126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.783212 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.819603 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.860931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.903214 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.940803 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:17 crc kubenswrapper[4183]: I0813 19:56:17.982282 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:17Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.026320 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.063229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.101991 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.143582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.183959 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.208994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209070 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.209353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.209714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:18 crc kubenswrapper[4183]: E0813 19:56:18.210240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.226485 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.269575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.320088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.352706 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.389781 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.423420 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.432088 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.432205 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.463661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.499939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.543935 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.582775 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.623942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.663709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.702396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.742640 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.780916 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.825602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.872737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.903669 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.944095 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:18 crc kubenswrapper[4183]: I0813 19:56:18.981653 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:18Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.026010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.062359 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.101305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.144546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.181393 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209254 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209372 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209554 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209442 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.209937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.209455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210083 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.210762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211133 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.211747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.211909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212153 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212770 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.212917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213223 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.212875 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.213769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.213954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214169 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.214965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215798 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.215926 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:19 crc kubenswrapper[4183]: E0813 19:56:19.216258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.219556 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.264480 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.302417 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.342617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.384648 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.424299 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.432424 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:19 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:19 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.432912 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.462156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.505269 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.545297 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.673785 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.706049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.727940 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.752038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.769963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.832900 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.849550 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.876249 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.906455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.943305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:19 crc kubenswrapper[4183]: I0813 19:56:19.983367 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:19Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.023068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.063089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.100118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.141356 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.181972 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210221 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210465 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.210492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.210955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.211033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.211989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.213946 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.213966 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.215044 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.227465 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.265598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.339301 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.354486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.386384 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.430137 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.432887 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.432975 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.464674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.496082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: E0813 19:56:20.502134 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.519051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-0
8-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.546187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.593213 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.624098 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.660919 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.699445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.744033 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.784885 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.821728 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.862049 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.900037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.941619 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:20 crc kubenswrapper[4183]: I0813 19:56:20.985322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:20Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.023563 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.064307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.106073 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.144427 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.183199 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208930 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209020 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208881 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209256 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209314 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.208951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209614 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.209905 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210447 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210711 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.210881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.210886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211456 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211510 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211621 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211780 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211912 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.211977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.211997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212283 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.212746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213862 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.213994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214056 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:21 crc kubenswrapper[4183]: E0813 19:56:21.214198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.224279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.262956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c
418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:21Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.432690 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:21 crc kubenswrapper[4183]: I0813 19:56:21.432951 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209548 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209548 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209617 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.209630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210093 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.210995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.211166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:22 crc kubenswrapper[4183]: E0813 19:56:22.211317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.432667 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:22 crc kubenswrapper[4183]: I0813 19:56:22.432776 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.208984 4183 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.209420 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.209751 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210022 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210617 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.210872 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211206 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.211271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.211606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212659 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.212933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.212957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.213728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213784 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214085 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214118 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214370 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.214894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.214993 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215075 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215185 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.215546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:23 crc kubenswrapper[4183]: E0813 19:56:23.216570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.432949 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:23 crc kubenswrapper[4183]: I0813 19:56:23.433115 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208208 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.208442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.208743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.208939 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.209532 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.209977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.433884 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.434077 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700048 4183 kubelet_node_status.go:729] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700153 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700178 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.700251 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.716426 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723122 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723215 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723295 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723329 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.723370 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.739930 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745076 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745146 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745162 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745185 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.745232 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.759840 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765005 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765054 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765075 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765100 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.765126 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.779063 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784634 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784675 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784688 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784710 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:24 crc kubenswrapper[4183]: I0813 19:56:24.784737 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:24Z","lastTransitionTime":"2025-08-13T19:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.798616 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:24Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:24 crc kubenswrapper[4183]: E0813 19:56:24.798684 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.208335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.208952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209248 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209277 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209383 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.209967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210419 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210603 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.210662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210766 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.210915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.211467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.211561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.212669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.212786 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213148 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213301 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213482 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.213544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213908 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.213997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214104 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.214321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.215348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.230450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.247405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.268587 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.286304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.303362 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.317926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.335582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.350646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.373454 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.390222 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.408072 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.425205 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.431509 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.431603 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.441396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.455680 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.473899 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.491360 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: E0813 19:56:25.503760 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.509424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.533014 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.548674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.564661 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.580489 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.601151 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.623561 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.641068 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.656056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.673617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.690458 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.701915 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.714892 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.732566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.751595 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.768521 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.785310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.802176 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.818491 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.838598 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.854870 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.882463 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.902023 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.920979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.938464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.955760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.973037 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:25 crc kubenswrapper[4183]: I0813 19:56:25.998760 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:25Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.018333 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.035385 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.050514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.065416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.100773 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.155977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.193292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209172 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.209346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209543 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209620 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:26 crc kubenswrapper[4183]: E0813 19:56:26.209937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.211758 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.231006 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.248508 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.277241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.296764 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.311566 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.326400 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.354032 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.379063 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.403086 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.429143 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.432332 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.432446 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.450079 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c723a139f23a3336e57ce6a056c468156774ec1fd4c2f072703214795be1d791\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:49Z\\\",\\\"message\\\":\\\"2025-08-13T19:54:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f\\\\n2025-08-13T19:54:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_151a07d5-a50c-4804-949d-5e97322c428f to /host/opt/cni/bin/\\\\n2025-08-13T19:54:04Z [verbose] multus-daemon started\\\\n2025-08-13T19:54:04Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:54:49Z [error] 
have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.465152 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod 
\"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.483462 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.501607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:26 crc kubenswrapper[4183]: I0813 19:56:26.519010 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:26Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.209763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210073 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210679 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.210735 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210782 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210926 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.210987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211033 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211106 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211284 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211503 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.211570 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211888 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.211972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212522 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212689 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212945 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.212950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.212992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.213651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.213901 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.214315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.214386 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.215184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.215409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.215964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.216180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.216712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.217522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.224625 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.225390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:27 crc kubenswrapper[4183]: E0813 19:56:27.225536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.434736 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:27 crc kubenswrapper[4183]: I0813 19:56:27.435371 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.208964 4183 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209311 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.209632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209771 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.209946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.210054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.210573 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:28 crc kubenswrapper[4183]: E0813 19:56:28.211131 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.233990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.260299 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.357336 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.418047 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.433269 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.433428 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.445871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.466549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.521455 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.542326 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.561741 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.593118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.612964 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.634608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.654884 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.674200 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.695472 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.712581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.728469 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.746766 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.769183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.791487 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.809622 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.832051 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.852544 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.892347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.916869 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.935146 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.956539 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:28 crc kubenswrapper[4183]: I0813 19:56:28.980600 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:28Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.025381 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.044215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.063505 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.081120 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.114123 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.135507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.151580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.169593 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.182709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.200942 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209125 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209586 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209872 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210034 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210352 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.209091 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210751 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.210892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.210965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211563 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211584 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211664 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211703 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.211907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.211914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212213 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212495 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.212545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.212903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213481 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213627 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.213661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.213919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214306 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:29 crc kubenswrapper[4183]: E0813 19:56:29.214432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.223285 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.239358 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.254769 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.272688 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.289248 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.317186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.336041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.358950 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab25
1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.388992 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.439765 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.439940 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.565093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 
2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.598019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 
\\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.617523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.646586 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.693340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.718939 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.737067 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.755028 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.774198 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.794979 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.811977 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.831254 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.854229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.877495 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.900155 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.924133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.952726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.971984 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:29 crc kubenswrapper[4183]: I0813 19:56:29.996294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:29Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.021090 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:30Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.209411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.209755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.209983 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.210564 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.211650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.211763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.433915 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:30 crc kubenswrapper[4183]: I0813 19:56:30.434596 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:30 crc kubenswrapper[4183]: E0813 19:56:30.509667 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209006 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209205 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.209868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210103 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210461 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210718 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210873 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.210947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211016 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211316 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211622 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.211899 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211917 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.211957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212444 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.212968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.212988 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213776 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.213974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.214476 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.214991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:31 crc kubenswrapper[4183]: E0813 19:56:31.215155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.433423 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:31 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:31 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:31 crc kubenswrapper[4183]: I0813 19:56:31.433535 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209107 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209135 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210362 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209218 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210691 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209257 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.210896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.211035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.211216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.211417 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:56:32 crc kubenswrapper[4183]: E0813 19:56:32.212166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.433986 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:32 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:32 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:32 crc kubenswrapper[4183]: I0813 19:56:32.434187 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208512 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.208707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208289 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208305 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208330 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209144 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208568 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209534 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.208225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.208883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.209988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210002 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210061 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210135 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210622 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210726 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210867 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210888 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.210909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.210962 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.211097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.211979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.212003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212124 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.212426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:33 crc kubenswrapper[4183]: E0813 19:56:33.213990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.434643 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:33 crc kubenswrapper[4183]: I0813 19:56:33.434749 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208601 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.208744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.208749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209075 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.209496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:34 crc kubenswrapper[4183]: E0813 19:56:34.209719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.432745 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:34 crc kubenswrapper[4183]: I0813 19:56:34.432945 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173154 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173258 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173282 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.173312 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.190060 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195649 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195747 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195769 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.195884 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.208519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208914 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.208765 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209040 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.208877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209061 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209299 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209410 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209643 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.209941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.209966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210051 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210074 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210668 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.210964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211368 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.211716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.211925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212260 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.212688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.213037 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.213971 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.214317 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.218438 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.225371 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227084 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227234 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227378 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady"
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.227629 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.231321 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.255693 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.255997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263157 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263280 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263408 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263670 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.263763 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.281267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.285061 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291047 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291150 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291174 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291199 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.291235 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:35Z","lastTransitionTime":"2025-08-13T19:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.303167 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.305852 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.306098 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.319336 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.335442 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.350897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.368024 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.390210 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.407238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 
2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.427848 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.432689 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.432882 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.444910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.463133 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.479682 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.498515 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: E0813 19:56:35.511479 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.515038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.533761 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.551324 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.569990 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.588562 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.616902 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.631852 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.647608 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.663514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.679297 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.704639 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] 
Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.723493 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.739076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.756040 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.771347 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.787684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.803370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.824609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.842581 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.860221 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.884089 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.914013 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.931507 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.946445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.965138 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:35 crc kubenswrapper[4183]: I0813 19:56:35.986679 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:35Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.004674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.021088 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.042220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.127137 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.148172 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.169532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.190295 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208869 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209126 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.209845 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:36 crc kubenswrapper[4183]: E0813 19:56:36.210115 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.213053 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server 
(\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.232737 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.250069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.266667 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.290307 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.312315 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.335611 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.355864 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.374987 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.397634 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.415379 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.432980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.433615 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.433816 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.451944 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.468602 4183 
status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.489233 4183 status_manager.go:877] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.506230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.524348 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.541674 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:36 crc kubenswrapper[4183]: I0813 19:56:36.558081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:36Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.208726 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209605 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.209856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210137 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209863 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.209974 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.210973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.211083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.211101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211418 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211449 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211458 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211509 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.211541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.212720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213187 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.213633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214105 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.214864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215609 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215667 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.215721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.216933 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:37 crc kubenswrapper[4183]: E0813 19:56:37.217693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.432029 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:37 crc kubenswrapper[4183]: I0813 19:56:37.432528 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.208992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209084 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.209086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.209343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209434 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.209679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.210472 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:38 crc kubenswrapper[4183]: E0813 19:56:38.210999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.432474 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:38 crc kubenswrapper[4183]: I0813 19:56:38.432591 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209168 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.210132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209185 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209249 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209429 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209517 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209651 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209869 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209886 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209915 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.209968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.210683 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.211492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.211869 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.212552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.212864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.213982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214081 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214320 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.214755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.214957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215380 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.215412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.215855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.216182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216540 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.216950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:39 crc kubenswrapper[4183]: E0813 19:56:39.217018 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.431916 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:39 crc kubenswrapper[4183]: I0813 19:56:39.432043 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.143853 4183 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.143985 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.208260 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.208520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.209201 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.209738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210313 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210220 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210392 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210238 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.210272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.210975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.211098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.431658 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:40 crc kubenswrapper[4183]: I0813 19:56:40.431818 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:40 crc kubenswrapper[4183]: E0813 19:56:40.513339 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.208971 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209327 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209334 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209602 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209248 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209818 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209274 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.209950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209992 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209345 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210180 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209377 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209143 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209385 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210510 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210575 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.210695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209410 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209416 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.210943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211065 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211351 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211420 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.211447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211724 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.211955 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.212560 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212744 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.212981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213189 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.213403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:41 crc kubenswrapper[4183]: E0813 19:56:41.213653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.432079 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:41 crc kubenswrapper[4183]: I0813 19:56:41.432175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.209321 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208481 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.210253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208570 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208616 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.208685 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211230 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.211576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.212181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:42 crc kubenswrapper[4183]: E0813 19:56:42.212232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.433717 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:42 crc kubenswrapper[4183]: I0813 19:56:42.433924 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209394 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210325 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210403 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.209681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.210766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.209885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211013 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.210981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211541 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.211585 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.211943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210153 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210178 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.210170 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212401 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212610 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212694 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.212913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.213041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213242 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.213520 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.213867 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.214048 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.214414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214520 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214594 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214725 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214850 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.214880 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215184 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215500 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:43 crc kubenswrapper[4183]: E0813 19:56:43.216009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.433489 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:43 crc kubenswrapper[4183]: I0813 19:56:43.433667 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209194 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209320 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.209893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.210576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.210735 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.211042 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.211324 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:44 crc kubenswrapper[4183]: E0813 19:56:44.211923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.437341 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:44 crc kubenswrapper[4183]: I0813 19:56:44.437424 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.209479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.209758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210611 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.210733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.210959 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211076 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211848 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.211966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.211893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212223 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212273 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.212515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212560 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212650 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212949 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.212918 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213146 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213242 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.213339 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213654 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.213746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214202 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.214247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.214479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214717 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.214816 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215271 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.215704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.236102 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.256173 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.277871 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.299117 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.318504 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.337392 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.357907 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.377195 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.396886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.412724 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.430289 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.435076 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.435208 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.446585 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.464267 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.484097 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509656 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509724 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509744 4183 kubelet_node_status.go:729] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509768 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509888 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.509980 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc 
kubenswrapper[4183]: E0813 19:56:45.514750 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.525106 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1
067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2
f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":49
2229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531511 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531565 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531580 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531602 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.531631 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.532275 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.547266 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.549638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553289 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553351 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553390 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.553415 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.565862 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.567330 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572534 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572578 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572592 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572610 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.572631 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.582486 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.588900 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594104 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594190 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594209 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594230 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.594260 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:45Z","lastTransitionTime":"2025-08-13T19:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.602856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.609613 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: E0813 19:56:45.609669 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.620545 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.637701 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.654989 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.679207 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.707893 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.722575 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.737966 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.753518 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.780421 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.802770 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.819528 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.838022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.855218 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.876594 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.892557 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.909627 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.928076 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.946730 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.965170 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:45 crc kubenswrapper[4183]: I0813 19:56:45.984230 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:45Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.002926 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.020187 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.034956 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.049617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.067022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.084043 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.099638 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.113445 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.128618 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.143514 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.164310 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.182140 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.201018 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.208969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209125 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209292 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.209491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.209920 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210480 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.210704 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:46 crc kubenswrapper[4183]: E0813 19:56:46.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.220238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.238069 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.254369 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.269416 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.285891 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.303135 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.317994 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.333931 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.352186 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.374635 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.391973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.404684 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.422394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.434053 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.434252 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:46 crc kubenswrapper[4183]: I0813 19:56:46.438904 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:46Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209073 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209132 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209314 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209277 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209691 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.209932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209940 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.209973 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210073 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210126 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210438 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210566 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210677 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.210960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210968 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211050 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211337 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.210256 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211448 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211733 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.211941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.211953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212290 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212362 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212512 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.212635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.212762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.213534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.213665 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.213960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214120 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214369 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:47 crc kubenswrapper[4183]: E0813 19:56:47.214514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.432962 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:47 crc kubenswrapper[4183]: I0813 19:56:47.433095 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208415 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.208561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209000 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.208641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:56:48 crc kubenswrapper[4183]: E0813 19:56:48.209585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.432899 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:48 crc kubenswrapper[4183]: I0813 19:56:48.433067 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209365 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.209112 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.209974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210630 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210727 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.210872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.210954 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211012 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211196 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211322 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211570 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211597 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211644 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.211728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.211984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212073 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212171 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212320 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212434 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212572 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.212666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.212983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213381 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213536 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213633 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.213961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:49 crc kubenswrapper[4183]: E0813 19:56:49.214022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.432293 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:49 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:49 crc kubenswrapper[4183]: I0813 19:56:49.432456 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.208852 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209066 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.209706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.209921 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210008 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.210140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.210432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.432382 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:50 crc kubenswrapper[4183]: I0813 19:56:50.432541 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:50 crc kubenswrapper[4183]: E0813 19:56:50.516577 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209133 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209242 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209247 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209267 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209559 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209668 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209714 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209879 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.209922 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.209885 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210210 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210762 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.210900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.210972 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211026 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211137 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211227 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211478 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211524 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211712 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.211960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.211993 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212041 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212320 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212407 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212551 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.212897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.212975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213058 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.213086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.213856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:51 crc kubenswrapper[4183]: E0813 19:56:51.214262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.432427 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:51 crc kubenswrapper[4183]: I0813 19:56:51.432549 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.006655 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.006914 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007002 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007056 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007111 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007161 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007253 4183 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007285 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007346 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007333 4183 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007415 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007349852 +0000 UTC m=+900.700014910 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007452 4183 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007494 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007472935 +0000 UTC m=+900.700137893 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007555 4183 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007558 4183 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007600 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007573378 +0000 UTC m=+900.700238106 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-key" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007603 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007616919 +0000 UTC m=+900.700281577 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007564 4183 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007479 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.00764336 +0000 UTC m=+900.700308568 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007679 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007694 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007726 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007697712 +0000 UTC m=+900.700363000 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007744 4183 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007755 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007741973 +0000 UTC m=+900.700406601 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007848 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.007768434 +0000 UTC m=+900.700433082 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.007890 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.007766 4183 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008004 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.00798148 +0000 UTC m=+900.700646438 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008009 4183 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008135 4183 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008195 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008199 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008180835 +0000 UTC m=+900.700845883 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008264 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008235867 +0000 UTC m=+900.700900535 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008297 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008343 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008349 4183 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.008400 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008451 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008478 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008513 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008520 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008510885 +0000 UTC m=+900.701175623 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008604 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008587467 +0000 UTC m=+900.701252145 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008630 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008618998 +0000 UTC m=+900.701283796 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.008640089 +0000 UTC m=+900.701304707 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.008978 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009009 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009084 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009069591 +0000 UTC m=+900.701734299 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009116 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009394 4183 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009448982 +0000 UTC m=+900.702114050 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009500 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009550 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009539094 +0000 UTC m=+900.702203912 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009604 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009649 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.009690 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.009681438 +0000 UTC m=+900.702346176 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.009719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010074 4183 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010186 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010163532 +0000 UTC m=+900.702828450 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010283 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010190 4183 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010370 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010354507 +0000 UTC m=+900.703019306 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : object "openshift-service-ca"/"signing-cabundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010436 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.010466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010479 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010466141 +0000 UTC m=+900.703130849 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010543 4183 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010618 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010680 4183 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010714708 +0000 UTC m=+900.703379396 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010750 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010739388 +0000 UTC m=+900.703404176 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.010859 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.010760249 +0000 UTC m=+900.703424927 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"oauth-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.112476 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.112704 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.113506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.113475002 +0000 UTC m=+900.806139800 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"image-import-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.113656 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.113752 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114054 4183 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.113885 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114172 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114149981 +0000 UTC m=+900.806814759 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114216 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114200723 +0000 UTC m=+900.806865711 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114636 4183 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.114752 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.114736638 +0000 UTC m=+900.807401446 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"installation-pull-secrets" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115068 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115412 4183 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115522 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.11550687 +0000 UTC m=+900.808171738 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : object "openshift-route-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115852 4183 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.115913 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.115903241 +0000 UTC m=+900.808567859 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.115931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.116064 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.116148 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.116127588 +0000 UTC m=+900.808792386 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.117854 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.117958 4183 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118043 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.118027002 +0000 UTC m=+900.810691730 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.118196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118255 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.118321 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.11830789 +0000 UTC m=+900.810972508 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.208951 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209183 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.209749 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.209940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.210078 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.210173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.219661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219908 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219932 4183 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.219947 4183 projected.go:200] Error preparing data for projected volume 
kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220003 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.219985583 +0000 UTC m=+900.912650341 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220048 4183 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220073 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220133 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220115147 +0000 UTC m=+900.912779965 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.219931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220477 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220530 4183 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220517248 +0000 UTC m=+900.913182066 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220584 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220661 4183 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220689 4183 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220732 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220716864 +0000 UTC m=+900.913381672 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.220869 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220884 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220868368 +0000 UTC m=+900.913533096 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220929 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220968 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.220958481 +0000 UTC m=+900.913623099 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.220989 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221063 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221100 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221070114 +0000 UTC m=+900.913735362 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221111 4183 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221135 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221157 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221148166 +0000 UTC m=+900.913812784 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221178 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221166597 +0000 UTC m=+900.913831405 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221181 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221274 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221314 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221333 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221344 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221359 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221377 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221366163 +0000 UTC m=+900.914030831 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221432 4183 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221482 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221471486 +0000 UTC m=+900.914136104 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221519 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221563 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221554898 +0000 UTC m=+900.914219516 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221565 4183 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221523 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221675 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221649361 +0000 UTC m=+900.914314619 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221708 4183 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221742 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221734153 +0000 UTC m=+900.914398781 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.221744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221857 4183 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.221893 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.221881467 +0000 UTC m=+900.914546075 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222211 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222279 4183 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222294 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222324 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22231453 +0000 UTC m=+900.914979268 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222386 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222425 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222416213 +0000 UTC m=+900.915080961 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222485 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222513 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222549 4183 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222590 4183 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222639 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222612898 +0000 UTC m=+900.915278156 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"audit" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222678 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222708 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222699821 +0000 UTC m=+900.915364439 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.222711 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222720 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222744 4183 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222645 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222680 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222876 4183 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222923 4183 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222936 4183 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.222759 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.222748762 +0000 UTC m=+900.915413510 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223003 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223036 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223005269 +0000 UTC m=+900.915670557 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223056 4183 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223095 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223085082 +0000 UTC m=+900.915749840 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223119 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223108992 +0000 UTC m=+900.915773820 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223168 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223156104 +0000 UTC m=+900.915820872 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223203 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223197175 +0000 UTC m=+900.915861773 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223257 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223289 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223635 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223702 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223729 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223730 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223822 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223765551 +0000 UTC m=+900.916430189 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223891 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.223922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223928 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223919406 +0000 UTC m=+900.916584224 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223926 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223952 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223956 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.223943306 +0000 UTC m=+900.916608074 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223994 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224006 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224012 4183 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224018 4183 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224032 4183 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224036 4183 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223764 4183 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224063 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224054189 +0000 UTC m=+900.916718937 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224082 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22407219 +0000 UTC m=+900.916736778 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224096 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224136 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224126661 +0000 UTC m=+900.916791470 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224050 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224155 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224146232 +0000 UTC m=+900.916811110 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223962 4183 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224351 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224334437 +0000 UTC m=+900.916999055 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.223673 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224402 4183 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224421 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224454 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224492 4183 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224465081 +0000 UTC m=+900.917130289 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224510 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224354 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224567 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224555324 +0000 UTC m=+900.917220292 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224588 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224594 4183 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224602 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224646 4183 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224619 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod 
openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.224665 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.224648936 +0000 UTC m=+900.917313854 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224751 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.224978 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225024 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: 
\"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225023 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225121 4183 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225153 4183 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225171 4183 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225182 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d7ntf for pod openshift-service-ca/service-ca-666f99b6f-vlbxv: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225064 4183 configmap.go:199] Couldn't get configMap 
openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225088 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225078049 +0000 UTC m=+900.917742637 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225340 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225257594 +0000 UTC m=+900.917922212 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225361 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225353616 +0000 UTC m=+900.918018505 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"openshift-global-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225376 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.225370147 +0000 UTC m=+900.918034735 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"encryption-config-1" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225491 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf podName:378552fd-5e53-4882-87ff-95f3d9198861 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.2254799 +0000 UTC m=+900.918144508 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d7ntf" (UniqueName: "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf") pod "service-ca-666f99b6f-vlbxv" (UID: "378552fd-5e53-4882-87ff-95f3d9198861") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225728 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225857 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. 
No retries permitted until 2025-08-13 19:58:54.225761968 +0000 UTC m=+900.918426596 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"trusted-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.225951 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.225988 4183 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226016 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226051 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 
19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226095 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226084517 +0000 UTC m=+900.918749135 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226147 4183 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226161 4183 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226171 4183 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226202 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22619228 +0000 UTC m=+900.918857029 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226228 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.226298 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226321 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 
19:56:52.226328 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226402 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226383836 +0000 UTC m=+900.919048754 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226407 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226427 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226437 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226445 4183 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226454 4183 
projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226317 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226484 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226475289 +0000 UTC m=+900.919140117 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226502 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226520 4183 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226525 4183 
configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226539 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22652795 +0000 UTC m=+900.919192698 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226565 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226576 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226559291 +0000 UTC m=+900.919224159 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226403 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226579 4183 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226650 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226622883 +0000 UTC m=+900.919287951 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226654 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226725 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226708335 +0000 UTC m=+900.919373353 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.226963 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.226949922 +0000 UTC m=+900.919614530 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227122 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227311 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227362 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227387 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 
19:56:52.227412 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227439 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227481 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227504 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: 
\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227554 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227582 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227609 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227638 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.227671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227748 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227855 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.227771016 +0000 UTC m=+900.920435644 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227905 4183 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227931 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.22792461 +0000 UTC m=+900.920589218 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.227981 4183 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228004 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.227997002 +0000 UTC m=+900.920661710 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228036 4183 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228059 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228053564 +0000 UTC m=+900.920718172 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228090 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228112 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228105755 +0000 UTC m=+900.920770363 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228141 4183 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228165 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228157837 +0000 UTC m=+900.920822455 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228194 4183 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228215 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228209228 +0000 UTC m=+900.920873836 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : object "openshift-controller-manager"/"client-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228261 4183 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228273 4183 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228281 4183 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object 
"openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228305 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228298471 +0000 UTC m=+900.920963289 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228334 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228355 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228349052 +0000 UTC m=+900.921013660 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228391 4183 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228412 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228406364 +0000 UTC m=+900.921070982 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228453 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228465 4183 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228472 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod 
openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228498 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228491986 +0000 UTC m=+900.921156594 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228539 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228549 4183 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228556 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hpzhn for pod openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: 
E0813 19:56:52.228581 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn podName:af6b67a3-a2bd-4051-9adc-c208a5a65d79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228574319 +0000 UTC m=+900.921239027 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hpzhn" (UniqueName: "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn") pod "route-controller-manager-5c4dbb8899-tchz5" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228626 4183 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228637 4183 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228644 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r8qj9 for pod openshift-apiserver/apiserver-67cbf64bc9-mtx25: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228671 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9 podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228659891 +0000 UTC m=+900.921324499 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r8qj9" (UniqueName: "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228711 4183 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.228735 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.228728573 +0000 UTC m=+900.921393191 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329202 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329264 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329278 4183 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329357 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.329338005 +0000 UTC m=+901.022002744 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.329541 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329903 4183 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.329981 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.329966463 +0000 UTC m=+901.022631191 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.329692 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330253 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330334 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: 
\"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330547 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330543 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330599 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330613 4183 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330668 4183 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330686 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.330664603 +0000 UTC m=+901.023329401 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330705 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.330697124 +0000 UTC m=+901.023361752 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"console-oauth-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330710 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330751 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330751 4183 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.330579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331014 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330763 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331114 4183 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331128 4183 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331137 4183 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-585546dd8b-v5m4t: object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330876 4183 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.330894 4183 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331167 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331156437 +0000 UTC m=+901.023821265 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331303 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331348 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331325322 +0000 UTC m=+901.023990270 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"image-registry-tls" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331484 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331492 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331506 4183 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331517 4183 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331549 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331522768 +0000 UTC m=+901.024187686 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331592 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331574929 +0000 UTC m=+901.024239927 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331618 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331624 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33160964 +0000 UTC m=+901.024274538 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331647 4183 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331665 4183 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331690 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331728 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331710023 +0000 UTC m=+901.024374961 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331766 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331853 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331866 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.331902 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.331892878 +0000 UTC m=+901.024557616 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.331943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332039 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332094 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332094 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332066413 +0000 UTC m=+901.024731411 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332108 4183 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332148 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332146 4183 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332299 4183 projected.go:200] Error preparing data for projected volume kube-api-access-pzb57 for pod openshift-controller-manager/controller-manager-6ff78978b4-q4vv8: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332337 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57 podName:87df87f4-ba66-4137-8e41-1fa632ad4207 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332326781 +0000 UTC m=+901.024991529 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pzb57" (UniqueName: "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57") pod "controller-manager-6ff78978b4-q4vv8" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332336 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332422 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332407643 +0000 UTC m=+901.025072371 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332426 4183 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332462 4183 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332476 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332467955 +0000 UTC m=+901.025132543 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332480 4183 projected.go:200] Error preparing data for projected volume kube-api-access-w4r68 for pod openshift-authentication/oauth-openshift-765b47f944-n2lhl: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.332538 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68 podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.332520526 +0000 UTC m=+901.025185474 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4r68" (UniqueName: "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.332967 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.333046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333088 4183 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333131 4183 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333138 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333126874 +0000 UTC m=+901.025791612 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333204 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333187085 +0000 UTC m=+901.025851913 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333382 4183 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.333470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333483 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.333459553 +0000 UTC m=+901.026124201 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333608 4183 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.333714 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit podName:23eb88d6-6aea-4542-a2b9-8f3fd106b4ab nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33369312 +0000 UTC m=+901.026358068 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit") pod "apiserver-67cbf64bc9-mtx25" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab") : object "openshift-apiserver"/"audit-1" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334103 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334178 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334213 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334231 4183 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334268 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334313 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334293467 +0000 UTC m=+901.026958395 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334331 4183 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334378 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334367869 +0000 UTC m=+901.027032467 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334391 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334420 4183 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334438 4183 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334482 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334497 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334480572 +0000 UTC m=+901.027145610 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334565 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334600 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334609 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca podName:c5bb4cdd-21b9-49ed-84ae-a405b60a0306 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334597986 +0000 UTC m=+901.027262704 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : object "openshift-image-registry"/"trusted-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334649 4183 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334673 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.334666598 +0000 UTC m=+901.027331216 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334720 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334731 4183 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334741 4183 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334850 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33475951 +0000 UTC m=+901.027424138 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.334731 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.334983 4183 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335040 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335058 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335035338 +0000 UTC m=+901.027700366 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : object "openshift-console"/"service-ca" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335094 4183 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335111 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335123 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33511498 +0000 UTC m=+901.027779708 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335279 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335282 4183 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335347 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335330287 +0000 UTC m=+901.027995095 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335423 4183 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335453 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335455 4183 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335482 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335501 4183 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] 
Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335470 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert podName:13ad7555-5f28-4555-a563-892713a8433a nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.33546074 +0000 UTC m=+901.028125468 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert") pod "oauth-openshift-765b47f944-n2lhl" (UID: "13ad7555-5f28-4555-a563-892713a8433a") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335467 4183 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335430 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335645 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335632105 +0000 UTC m=+901.028296823 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335724 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335704677 +0000 UTC m=+901.028378065 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.335768 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.335758249 +0000 UTC m=+901.028422837 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.335945 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336215 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336244 4183 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336259 4183 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336295 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336286404 +0000 UTC m=+901.028951022 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336329 4183 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336347 4183 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336355 4183 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336386 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336377506 +0000 UTC m=+901.029042124 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336499 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.336575 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336757 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Aug 13 19:56:52 
crc kubenswrapper[4183]: E0813 19:56:52.336878 4183 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336890 4183 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.336951 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.336934972 +0000 UTC m=+901.029599590 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.337359 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.337347574 +0000 UTC m=+901.030012222 (durationBeforeRetry 2m2s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.433241 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:52 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:52 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.433378 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.438884 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439174 4183 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439253 4183 projected.go:294] Couldn't get 
configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439270 4183 projected.go:200] Error preparing data for projected volume kube-api-access-lz9qh for pod openshift-console/console-84fccc7b6-mkncc: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439269 4183 projected.go:294] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439297 4183 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/revision-pruner-8-crc: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439367 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh podName:b233d916-bfe3-4ae5-ae39-6b574d1aa05e nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.439346057 +0000 UTC m=+901.132010795 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lz9qh" (UniqueName: "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh") pod "console-84fccc7b6-mkncc" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.439384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.439395 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access podName:72854c1e-5ae2-4ed6-9e50-ff3bccde2635 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.439385318 +0000 UTC m=+901.132050036 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access") pod "revision-pruner-8-crc" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: I0813 19:56:52.439906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440236 4183 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440307 4183 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440325 4183 projected.go:200] Error preparing data for projected volume kube-api-access-r7dbp for pod openshift-marketplace/redhat-marketplace-rmwfn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:52 crc kubenswrapper[4183]: E0813 19:56:52.440416 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp podName:9ad279b4-d9dc-42a8-a1c8-a002bd063482 nodeName:}" failed. No retries permitted until 2025-08-13 19:58:54.440395007 +0000 UTC m=+901.133059735 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r7dbp" (UniqueName: "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp") pod "redhat-marketplace-rmwfn" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212906 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.212960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.212994 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213430 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213590 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213644 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213708 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213711 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.213911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.213964 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214129 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214251 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214531 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214572 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.214698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214942 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214969 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.214743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215093 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215094 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215228 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215230 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215279 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215609 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215763 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215860 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215910 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.215976 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216043 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.216123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.215721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.216421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.217017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217173 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.217958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218232 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218510 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218841 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.218961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:53 crc kubenswrapper[4183]: E0813 19:56:53.219909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.433185 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:53 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:53 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:53 crc kubenswrapper[4183]: I0813 19:56:53.433295 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.208760 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209229 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209234 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209315 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209532 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209665 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.209987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.210082 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:54 crc kubenswrapper[4183]: E0813 19:56:54.210473 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.433286 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.433513 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677470 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677664 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.677901 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 
19:56:54.677967 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:56:54 crc kubenswrapper[4183]: I0813 19:56:54.678012 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221052 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.221507 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220578 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.223593 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.223762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.224019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220611 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.220700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227109 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227587 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227631 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227713 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.227961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.227977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228038 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.228360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.228497 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.229344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230307 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230538 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.230699 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.230946 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.230987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231152 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231234 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231313 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231555 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.231698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231754 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.231857 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232373 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232603 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.232654 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232727 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.232762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.232941 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233028 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233249 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233452 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233478 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233740 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.233923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.234073 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.256464 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.308340 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.338906 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.367109 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.394532 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.417519 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.432527 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:55 crc 
kubenswrapper[4183]: healthz check failed Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.432662 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.439166 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.462444 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.483103 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.503670 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.518104 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.522390 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.542206 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver 
openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.572258 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.589877 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.607284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.625441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.648886 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.670923 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.689145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.708963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.727056 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.741697 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.757192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.776238 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.793569 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.819525 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.841751 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850661 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850729 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850743 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850767 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.850873 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.860895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.867695 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.874961 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875035 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875089 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875110 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.875188 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.883106 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b5
6bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.892060 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed
92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25
ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha
256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899612 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899633 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 
13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899727 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.899765 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.911897 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.921211 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.928921 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929036 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929293 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929323 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.929350 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.931659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.949509 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet 
has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948
a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9
f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:late
st\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955321 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955640 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.955549 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956297 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956380 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.956414 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:56:55Z","lastTransitionTime":"2025-08-13T19:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.976862 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:55 crc kubenswrapper[4183]: E0813 19:56:55.976953 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:56:55 crc kubenswrapper[4183]: I0813 19:56:55.982193 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.002440 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:55Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.022208 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.041617 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.062279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.079343 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.095441 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.112237 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.131963 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.147240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.169278 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.186660 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.203240 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.211598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.211895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212091 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212236 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212270 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.212102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.211896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212641 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212857 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.212997 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.213071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.215348 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:56:56 crc kubenswrapper[4183]: E0813 19:56:56.216416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.221740 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.240602 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.258093 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.276651 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.295134 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.316546 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.337316 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.361370 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.382947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.397817 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.418685 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.432578 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.432737 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.439396 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.458045 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z" Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.476726 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.492223 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.511080 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.531878 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.550300 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.570007 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.585535 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.602639 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:56 crc kubenswrapper[4183]: I0813 19:56:56.619589 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:56:56Z is after 2024-12-26T00:46:02Z"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208958 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.208965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209077 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209094 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209140 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209107 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209243 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209699 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209937 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209949 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209979 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.209995 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.210877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210876 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210958 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.210968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211087 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211492 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211565 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211607 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.211985 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212514 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212542 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212574 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212685 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.212967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:56:57 crc kubenswrapper[4183]: E0813 19:56:57.213244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.433093 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:57 crc kubenswrapper[4183]: I0813 19:56:57.433217 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.209191 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.209528 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.211262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.210269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.211273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.210569 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.212025 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:56:58 crc kubenswrapper[4183]: E0813 19:56:58.212086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.432538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:56:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:56:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:56:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:56:58 crc kubenswrapper[4183]: I0813 19:56:58.433036 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209079 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209888 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.210031 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210048 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209237 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209310 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209342 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210188 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209400 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210354 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209581 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209599 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209626 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209641 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.209297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211403 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211719 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.211939 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212010 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212108 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212583 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.213310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213463 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213636 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.213916 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214382 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214495 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.214639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.216074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:56:59 crc kubenswrapper[4183]: E0813 19:56:59.217427 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.433400 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:56:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:56:59 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:56:59 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:56:59 crc kubenswrapper[4183]: I0813 19:56:59.433499 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.208202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.208525 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.208732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.208967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.209666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.209911 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.210030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.210239 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.434039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:00 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:00 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:00 crc kubenswrapper[4183]: I0813 19:57:00.434164 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:00 crc kubenswrapper[4183]: E0813 19:57:00.520077 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209349 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209438 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209466 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209658 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.209967 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.209964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210155 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210375 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210458 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210669 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.210900 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.210967 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211021 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211095 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211128 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211442 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211576 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211626 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211654 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.211698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.211752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212216 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212467 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212468 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212652 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.212701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.212937 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.213306 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213422 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.213475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.213679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214270 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214490 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214591 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.214642 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:01 crc kubenswrapper[4183]: E0813 19:57:01.214974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.432910 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:01 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:01 crc kubenswrapper[4183]: I0813 19:57:01.433091 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.209747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209249 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.210257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209278 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209308 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209344 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.209375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.210957 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:02 crc kubenswrapper[4183]: E0813 19:57:02.211612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.433625 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:02 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:02 crc kubenswrapper[4183]: I0813 19:57:02.433761 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208403 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208441 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208462 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208484 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209568 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208531 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209947 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208558 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210246 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208618 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210688 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208668 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.210930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208666 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211132 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208697 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208700 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208719 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208735 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208869 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208769 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208870 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.208928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.209343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211468 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211623 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.211996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212082 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212406 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:03 crc kubenswrapper[4183]: E0813 19:57:03.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.432538 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:03 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:03 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:03 crc kubenswrapper[4183]: I0813 19:57:03.432657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208526 
4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208597 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.208724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.208961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.209003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.209264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209338 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.209670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:04 crc kubenswrapper[4183]: E0813 19:57:04.210439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.432401 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:04 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:04 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:04 crc kubenswrapper[4183]: I0813 19:57:04.432498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208644 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208742 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.208865 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.208996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209207 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209335 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209390 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209448 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209498 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209595 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209657 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209875 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.209877 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.209929 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210084 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210167 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210276 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210309 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210676 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210750 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210846 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210899 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.210907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.210914 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.211067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211454 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211518 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211700 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.211917 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212233 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212703 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212764 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.212966 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213045 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.213179 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.213297 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.213555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214097 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.214731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.215909 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.216304 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.233012 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready 
status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.251386 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.276211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.308609 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:
57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 
2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.328322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.346304 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.370438 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.389476 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.411373 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.426415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.431598 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:05 crc kubenswrapper[4183]: 
[-]has-synced failed: reason withheld Aug 13 19:57:05 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:05 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.431712 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.444541 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.462377 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: 
context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.480582 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.497925 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.514520 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: E0813 19:57:05.521723 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.532124 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:54:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.551449 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.572145 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.588510 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.608414 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.624842 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.645997 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.663466 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451
ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.683937 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.704973 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.726118 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.747703 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad
733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.768918 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.791175 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.808263 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.825492 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.849322 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.866211 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.884022 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.901126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.920748 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.937371 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.954083 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.973215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:05 crc kubenswrapper[4183]: I0813 19:57:05.991405 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:05Z is after 
2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.011450 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.027856 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.049470 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.068168 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.090284 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4
ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.106888 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.127328 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.148762 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.167735 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.187452 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.205447 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.208991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209196 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.209338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209513 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209918 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.209989 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.210098 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.265709 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299008 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299521 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299623 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.299722 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.314215 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.338715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.339053 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345370 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345449 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345467 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345485 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.345505 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.364440 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.376755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378718 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378909 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378934 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378961 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.378999 4183 setters.go:574] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.400566 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef54
97089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"siz
eBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6
315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406093 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406175 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406200 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406227 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.406252 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.411665 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5
982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc 
kubenswrapper[4183]: E0813 19:57:06.422309 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426488 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426589 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426612 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426641 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.426678 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:06Z","lastTransitionTime":"2025-08-13T19:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.432381 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:06 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:06 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.432490 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.434738 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.443050 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"ephemeral-storage\\\":\\\"76397865653\\\",\\\"memory\\\":\\\"13831544Ki\\\"},\\\"capacity\\\":{\\\"ephemeral-storage\\\":\\\"83295212Ki\\\",\\\"memory\\\":\\\"14292344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-08-13T19:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\"],\\\"sizeBytes\\\":2572133253},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1174d995af37ff8e5d8173276afecf16ec20e594d074ccd21d1d944b5bdbba05\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4d4d0edd652ff5b78c2704a4f537be106c9234d6cbd951ae2a461194fb88b1c6\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"],\\\"sizeBytes\\\":2121001615},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:6a58359b0d36a5a73982ca12769ac45681fbe70b0cdd8d5aed90eb425dfe3b2b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:bc4ee69425a59a9d92c27ee511fc281057ed7bff497c2a4fc2d9935e6c367fe3\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1374511543},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\"],\\\"sizeBytes\\\":1346691049},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\"],\\\"sizeBytes\\\":1222078702},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3dc5bbedad8cec4f9184d1405a7c54e649fce3ec681bbab1d2f948a5bf36c44f\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4bff896b071099ebb4f6a059f5c542cb373ac8575e17309af9fc9cf349956aa1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"],\\\"sizeBytes\\\":1116811194},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\"],\\\"sizeBytes\\\":1067242914},{\\\"names\\\":[\\\"registry.redhat.io/
redhat/redhat-marketplace-index@sha256:3507cb8b73aa1b88cf9d9e4033e915324d7db7e67547a9ac22e547de8611793f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:c0bbc686ed725ea089fb6686df8a6a119d6a9f006dc50b06c04c9bb0ccf6512d\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"],\\\"sizeBytes\\\":993487271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\"],\\\"sizeBytes\\\":874809222},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\"],\\\"sizeBytes\\\":829474731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\"],\\\"sizeBytes\\\":826261505},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\"],\\\"sizeBytes\\\":823328808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0aa8e5d7a7b4c6e7089fee8c2fcfd4ac66dd47b074701824b69319cfae8435e2\\\"],\\\"sizeBytes\\\":775169417},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\"],\\\"sizeBytes\\\":685289316},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\"],\\\"sizeBytes\\\":677900529},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\"],\\\"sizeBytes\\\":654603911},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\"],\\\"sizeBytes\\\":596693555},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba2294
7bd5d2ec5154fb7dfc4041c600f78\\\"],\\\"sizeBytes\\\":568208801},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\"],\\\"sizeBytes\\\":562097717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\"],\\\"sizeBytes\\\":541135334},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\"],\\\"sizeBytes\\\":539461335},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:2e2e16ed863d85008fdd9d23e601f620ec149ea4f1d79bc44449ba7a8ad6d2b8\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:79fb5507de16adabfad5cf6bc9c06004a0eebe779bf438ef3a101735d2c205c9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":520763795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\"],\\\"sizeBytes\\\":507363664},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\"],\\\"sizeBytes\\\":503433479},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\"],\\\"sizeBytes\\\":503286020},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\"],\\\"sizeBytes\\\":502054492},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\"],\\\"sizeBytes\\\":501535327},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\
\"],\\\"sizeBytes\\\":501474997},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\"],\\\"sizeBytes\\\":499981426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\"],\\\"sizeBytes\\\":498615097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\"],\\\"sizeBytes\\\":498403671},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\"],\\\"sizeBytes\\\":497554071},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\"],\\\"sizeBytes\\\":497168817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\"],\\\"sizeBytes\\\":497128745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\"],\\\"sizeBytes\\\":496236158},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\"],\\\"sizeBytes\\\":495929820},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\"],\\\"sizeBytes\\\":494198000},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\"],\\\"sizeBytes\\\":493495521},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\"],\\\"sizeBytes\\\":492229908},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\"],\\\"sizeBytes\\\":488729683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\"],\\\"sizeBytes\\\":487322445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\"],\\\"sizeBytes\\\":484252300},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\"],\\\"sizeBytes\\\":482197034},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\"],\\\"sizeBytes\\\":481069430},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:d4ae187242ec50188e765b3cad94c07706548600d888059acf9f18cc4e996dc6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:f8b01d4bc2db4bf093788f2a7711037014338ce6e3f243036fe9c08dade252d6\\\",\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\"],\\\"sizeBytes\\\":476206289},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:450797700afd562ba3f68a8c07b723b5d2fec47f48d20907d60b567aca8b802f\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e79d574eda09fd6b39c17759605e5ea1e577b8008347c7824ec7a47fd1f8f815\\\",\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\"],\\\"sizeBytes\\\":473948807},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\"],\\\"sizeBytes\\\":469995872},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\"],\\\"sizeBytes\\\":469119456},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\"],\\\"sizeBytes\\\":466544831},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\"],\\\"sizeBytes\\\":464091925}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7bac8de7-aad0-4ed8-a9ad-c4391f6449b7\\\",\\\"systemUUID\\\":\\\"b5eaf2e9-3c86-474e-aca5-bab262204689\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: E0813 19:57:06.443135 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.450719 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.468276 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.486451 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.504126 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.523291 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.542000 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.562857 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.577177 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.599599 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:06 crc kubenswrapper[4183]: I0813 19:57:06.616132 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:06Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208625 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.208864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209322 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209287 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.208573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209436 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209579 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209588 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209638 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209674 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209698 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209710 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.209882 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209913 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.209931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210105 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210236 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210369 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210415 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210483 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210486 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210624 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210682 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.210736 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.210767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211634 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.211655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211721 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.211919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.212678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.212763 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.212894 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.213966 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.213995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214038 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214215 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214278 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214416 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.214964 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215062 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215303 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.215572 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215580 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.215949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:07 crc kubenswrapper[4183]: E0813 19:57:07.216154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.432008 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:07 crc kubenswrapper[4183]: I0813 19:57:07.432121 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208694 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209200 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208878 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.208923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209352 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.209440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.209963 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210088 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210231 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.210331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.210454 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:57:08 crc kubenswrapper[4183]: E0813 19:57:08.212519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482"
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.432324 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:08 crc kubenswrapper[4183]: I0813 19:57:08.432413 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209260 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209359 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209460 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209491 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209627 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209678 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209683 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.209889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209924 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.209986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210102 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210266 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210303 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210615 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.210907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.210988 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211067 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211310 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211362 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211436 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211535 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211628 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.211762 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.211988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212100 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.212317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.212372 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.213597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.214390 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.214460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214576 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.214743 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215470 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215961 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.215990 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216091 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216686 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:09 crc kubenswrapper[4183]: E0813 19:57:09.216893 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.432312 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:09 crc kubenswrapper[4183]: I0813 19:57:09.432422 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144195 4183 patch_prober.go:28] interesting 
pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144311 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.144370 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.145382 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.145608 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665" gracePeriod=600 Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208483 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208483 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.209643 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208526 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208551 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.208577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.209696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210553 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.210599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.212723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.433145 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.433281 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:10 crc kubenswrapper[4183]: E0813 19:57:10.524136 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577107 4183 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665" exitCode=0 Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577246 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665"} Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577516 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"afce55cdf18c49434707644f949a34b08fce40dba18e4191658cbc7d2bfeb9fc"} Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.577545 4183 scope.go:117] "RemoveContainer" containerID="9793e20b91e9b56bf36351555f0fa13732f38f7c0e501af8b481f9ad2d08e4f9" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.601156 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5a3a50ec641063c0e1f3fc43240ceca65b0ac8e04564a4f69a62288e1930b2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-788b7c6b6c-ctdmb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.620676 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-q88th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"475321a1-8b7e-4033-8f72-b05a8b377347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:56:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:56:14Z\\\",\\\"message\\\":\\\"2025-08-13T19:55:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61\\\\n2025-08-13T19:55:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbb7e047-911e-45ca-8d98-7b9cca149b61 to /host/opt/cni/bin/\\\\n2025-08-13T19:55:29Z [verbose] multus-daemon started\\\\n2025-08-13T19:55:29Z [verbose] Readiness Indicator file check\\\\n2025-08-13T19:56:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-q88th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.638035 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fb762d1-812f-43f1-9eac-68034c1ecec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1e9cd3f235daca20a91dacb18cf04855fbc96733bcd2d62bf81ced55a888ac4\\\",\\\"image\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"imageID\\\":\\\"quay.io/crcont/ocp-release@sha256:65efcc4be5509483168263ee09cbedf25ece6d8e13e302b01754aa6835d4ea69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-6d5d9649f6-x6d46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.653861 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l92hr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3ebbd63a067d55279438986a1626528505555c144c3a154b2ef9b78a804917\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-image-registry\"/\"node-ca-l92hr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.672057 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed024e5d-8fc2-4c22-803d-73f3c9795f19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-78d54458c4-sc8h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.689895 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43853133e59a34528c9018270d1f3b7952c38126adc543ec1c49573ad8f92519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2024-06-27T13:25:33Z\\\",\\\"message\\\":\\\"an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821312 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.RoleBinding ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821367 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Service ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821402 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821488 1 reflector.go:462] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0627 13:25:33.821752 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2024-06-27T13:25:33.824Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind *v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-06-27T13:23:33Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":9,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.708523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-768d5b5d86-722mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.727279 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b54e8941-2fc4-432a-9e51-39684df9089e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-7769bd8d7d-q5cvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.741845 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c678cfe3567d86af60bc7afa2a84a47516a8280d9e98103459b4a538206b85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.757649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9be16632cd8189dc7394ad78555ba32b3fce248282f388f8abbee4582a497f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afce55cdf18c49434707644f949a34b08fce40dba18e4191658cbc7d2bfeb9fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6
c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:57:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:57:10Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.777734 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"image-registry-585546dd8b-v5m4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.798631 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-7287f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"887d596e-c519-4bfa-af90-3edd9e1b2f0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-7287f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.817019 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-84fccc7b6-mkncc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-84fccc7b6-mkncc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.833558 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120b38dc-8236-4fa6-a452-642b8ad738ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-76788bff89-wkjgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.849947 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd556935-a077-45df-ba3f-d42c39326ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-8464bcc55b-sjnqz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.893332 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a863bc58eb8c5e6e566e800c24144011491c153110f62fdb112d5e33cebe615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b271d0faf90a64404377db2596c047849cba5d2f090c418ee21bdbb7c6ce5303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.910649 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b6d14a5-ca00-40c7-af7a-051a98a24eed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fe89592ae34affec07e6bf7041a0deddf56cd946e140285a2523c52bad453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:16Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-wwpnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.936523 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e53e26d-e94d-45dc-b706-677ed667c8ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827
a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.957273 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc291782-27d2-4a74-af79-c7dcb31535d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-network-operator\"/\"network-operator-767c585db5-zd56b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.976704 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:10 crc kubenswrapper[4183]: I0813 19:57:10.998571 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c085412c-b875-46c9-ae3e-e6b0d8067091\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6d8474f75f-x54mh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:10Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.020229 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"530553aa-0a1d-423e-8a22-f5eb4bdbb883\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-77658b5b66-dq5sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.038163 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6c7c885997-4hbbc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.068755 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87df87f4-ba66-4137-8e41-1fa632ad4207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager\"/\"controller-manager-6ff78978b4-q4vv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.087412 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af6b67a3-a2bd-4051-9adc-c208a5a65d79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-5c4dbb8899-tchz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.103659 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33b7f421-18ed-4980-bd54-2fec77176e75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fd903cdf088cfa900c26e875537eea07b9468052d9f40c27a340d7dca7cc5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6567ad7640d3428891ccd4aa8b7478cf539b21746181b3594b1d249a3bf595b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.127081 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-84d578d794-jw7r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.145394 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c3c2223e85e89c657ef6687dc57f1075aa0d16e5f1cccebc9f6a48911233b46\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-8b455464d-f9xdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.164353 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71af81a9-7d43-49b2-9287-c375900aa905\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.183260 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.199646 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0f40333-c860-4c04-8058-a0bf572dcf12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5c5478f8c-vqvt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208908 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209179 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208968 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.208997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209441 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209578 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209637 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209645 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209583 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209055 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209944 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.209981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209120 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209152 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209416 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210149 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.209025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210396 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210344 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210559 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210565 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210632 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210637 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210715 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210855 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.210894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.210924 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211040 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211240 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211300 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211615 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.211746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.211897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212011 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212312 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.212384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212530 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.212902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.213097 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213141 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213171 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.213907 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214404 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:11 crc kubenswrapper[4183]: E0813 19:57:11.214544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.224234 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34a48baf-1bee-4921-8bb2-9b7320e76f79\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-v54bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.243038 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/community-operators-8jhz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f4dca86-e6ee-4ec9-8324-86aff960225e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-8jhz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.258305 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [pruner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"pruner\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.274488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10603adc-d495-423c-9459-4caa405960bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9941e996bbf90d104eb2cad98bdaed8353e6c83a4ac1c34e9c65e6b1ac40fcc3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-75f687757b-nz2xb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.289129 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/node-resolver-dn27q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a23c0ee-5648-448c-b772-83dced2891ce\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ab14f8e790b04a3f595c5d086c2e9320eb4558fa34f382ae3616a8a6f1ffe79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:44Z\\\"}}}]}}\" for pod \"openshift-dns\"/\"node-resolver-dn27q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.309220 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-8s8pc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.326294 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a5ae51d-d173-4531-8975-f164c975ce1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-857456c46-7f5wf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.345580 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.365715 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:51:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c2a9093234c492e37c3e2379036aeb947a35b37f909cf844f4e86cc0212bf6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:51:12Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f
649864f0b71bb704fb7327709cdfe9ad128a95fa4ba9b372e3546ac75e5a7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54e6261beb529599e02d64c3b83ab1b6a8701fedea3b5fed323923589d377b87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ee8064ec173159b687104e067bf2f4030c3f956a26851c102fe621cb2f1fdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f1586
2c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://018119cc8d6e568949cc0f8c1eb60431b4ab15c35b176a350482dffb1a1154a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3367e8deffd49b143a1b3c6f72a96a3d1a313565f0e18be8351cce5c5a263c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bfad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e337c170208f915c4604859bf
ad2c8c70990e952bd948d3192c1023f1b0a2be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:51:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:51:11Z\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.383415 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51a02bbf-2d40-4f84-868a-d399ea18a846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0cffd60c6b43a0eb1f5bc2c37c36c0353f97c3188e918a561f00e68620f66050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-7xghp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.407496 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:56:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:53:29Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.424740 4183 status_manager.go:877] "Failed to update status for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12e733dd-0939-4f1b-9cbb-13897e093787\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [hostpath-provisioner node-driver-registrar liveness-probe csi-provisioner]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/container-native-virtualization/hostpath-csi-driver-rhel9:v4.13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"hostpath-provisioner\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-livenessprobe:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"liveness-probe\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/openshift4/ose-csi-node-driver-registrar:latest\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"hostpath-provisioner\"/\"csi-hostpathplugin-hvm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.431965 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.432058 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.444002 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.465876 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13ad7555-5f28-4555-a563-892713a8433a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f136230f9e7a63c993c9a5ee689c6fc3fc2c74c31de42ea51b0680765693f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-765b47f944-n2lhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.482488 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.500487 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e2c5a70fe7e9b625f5ef26f458c54d20eb41da9ac60e96442f3a33dacfae5ce\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-7c88c4c865-kn67m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.518607 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1a8966-f594-490a-9fbb-eec5bafd13d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[migrator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30f6d30b6bd801c455b91dc3c00333ffa9eec698082510d7abd3ad266d0de5a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-f7c6d88df-q2fnv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.535174 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f394926-bdb9-425c-b36e-264d7fd34550\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7978d7d7f6-2nt8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.559292 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e19f9e8-9a37-4ca8-9790-c219750ab482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:51Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a
103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T19:54:47Z\\\",\\\"message\\\":\\\"10.217.4.108:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {9037868a-bf59-4e20-8fc8-16e697f234f6}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0813 19:54:47.532432 19713 network_attach_def_controller.go:166] Shutting down network-controller-manager NAD controller\\\\nI0813 19:54:47.531652 19713 reflector.go:295] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532671 19713 reflector.go:295] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532752 19713 reflector.go:295] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:159\\\\nI0813 19:54:47.532927 19713 ovnkube.go:581] Stopped ovnkube\\\\nI0813 19:54:47.532945 19713 
reflector.go:295] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0813 19:54:47.532868 19713 reflector.go:295] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0813 19:54:47.532892 19713 reflector.go:295] Stoppin\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}}},{\\\"containerID\\\":\\\"cri-o://c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:59Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:50:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:42Z\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-44qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.579424 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bacb25d-97b6-4491-8fb4-99feae1d802a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:26660173efd872a01c061efc0bd4a2b08beb4e5d63e3d7636ec35ddcf5d3c1fa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-69c565c9b6-vbdpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.600041 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.614910 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-5dbbc74dc9-cp5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.631988 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.649929 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:730f5b20164dd87b074b356636cdfa4848f1159b412ccf7e09ab0c4554232730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.667192 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qdfr4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.684139 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-dns/dns-default-gbw49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13045510-8717-4a71-ade4-be95a76440a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4100378bdad23dfbaf635cc71846262fc1e11f874ca8829d9325daa5394f31d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-gbw49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.702689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59748b9b-c309-4712-aa85-bb38d71c4915\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[conversion-webhook-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e8f29122aea315d5914a7a44fb2b651ebb1927330eedafd6e148dee989e5e6b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"conversion-webhook-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-conversion-webhook-595f9969b-l6z49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.716183 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b5d722a-1123-4935-9740-52a08d018bc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-2vhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.736082 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c48d0ab22815dfdb3e171ef3df637ba22947bd5d2ec5154fb7dfc4041c600f78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-67cbf64bc9-mtx25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.781503 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5947f21-291a-48d6-85be-6bc67d8adcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9723e369c3916d110948c31ae90387a63e0bdda6978dcd36370f14f8c2bdb66c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f47488
8f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c20e702f19e2093811d938ddce6e1a50d603c53841803ac28e2a5ba40b4c3a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://86aa61fc366fbe870f8ef002711315bbfc6a6249a105234cf4c1b64e886c1f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3971f82b444869fdbecbfd54ef7a319b608fe63eef0e09d3f7a65b652ffafc3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0fe4d7a40c00f41501df7b85d725dd40f6d69f317508f2954c37396e2971bbdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://726a40f376f9fcf054f6b44b2237a348465ce1c95fb6027cbed57d44311501e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c5f8a0c563c087ce00c2a4d1e42a0ae8e4322fd18b01c8da58d3b47b8b8e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5375febaac0dd91b58ec329a5668a28516737ace6ba3f474888f2d43328c9db3\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1dea7ec0d5b34708cd76ebcd8f05f02f5161dd1f3953b66b78d6d2c3e12e8b73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.809968 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"378552fd-5e53-4882-87ff-95f3d9198861\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-666f99b6f-vlbxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.833429 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1a8b70-3856-486f-9912-a2de1d57c3fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3568a265e2d6b463508c020695a05cfa21e4c4c2cdc88bffea08aa00add2ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:43Z\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-v65wr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.861241 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [extract-utilities extract-content]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2858d5039ccec571b6cd26627bcc15672b705846caefb817b9c8fdc52c91b2a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-utilities\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}},{\\\"image\\\":\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"extract-content\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:50:39Z\\\"}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-rmwfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.891271 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09143b32-bfcb-4682-a82f-e0bfa420e445\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:58Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:31Z\\\",\\\"message\\\":\\\"W0813 19:47:30.268314 1 cmd.go:245] Using insecure, self-signed certificates\\\\nI0813 19:47:30.269111 1 crypto.go:601] Generating new CA for check-endpoints-signer@1755114450 cert, and key in /tmp/serving-cert-3525766047/serving-signer.crt, /tmp/serving-cert-3525766047/serving-signer.key\\\\nI0813 19:47:31.013071 1 observer_polling.go:159] Starting file observer\\\\nW0813 19:47:31.019750 1 builder.go:267] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\nI0813 19:47:31.020207 1 builder.go:299] check-endpoints version 4.16.0-202406131906.p0.gd790493.assembly.stream.el9-d790493-d790493cfc43fd33450ca27633cbe37aa17427d2\\\\nI0813 19:47:31.021545 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3525766047/tls.crt::/tmp/serving-cert-3525766047/tls.key\\\\\\\"\\\\nF0813 19:47:31.390339 1 cmd.go:170] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:31Z is after 2025-06-26T12:46:59Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:50:13Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:04Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.913689 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df02f99a-b4f8-4711-aedf-964dcb4d3400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:44:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:49:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:43:55Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T19:47:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0813 
19:47:20.625050 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0813 19:47:20.626387 1 observer_polling.go:159] Starting file observer\\\\nI0813 19:47:20.628211 1 builder.go:299] cluster-policy-controller version 4.16.0-202406131906.p0.geaea543.assembly.stream.el9-eaea543-eaea543f4c845a7b65705f12e162cc121bb12f88\\\\nI0813 19:47:20.629256 1 dynamic_serving_content.go:113] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0813 19:47:50.882294 1 cmd.go:170] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:47:49Z is after 
2025-06-26T12:47:18Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:47:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":6,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:49:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:43:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T19:44:00Z\\\"}}}],\\\"startTime\\\":\\\"2025-08-13T19:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:11 crc kubenswrapper[4183]: I0813 19:57:11.934654 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T19:50:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-08-13T19:57:11Z is after 2024-12-26T00:46:02Z" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209296 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209321 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210348 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.209435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210842 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.210974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:12 crc kubenswrapper[4183]: E0813 19:57:12.211439 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.435671 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:12 crc kubenswrapper[4183]: I0813 19:57:12.435765 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209005 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209117 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209224 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209042 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209460 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209532 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209450 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209680 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209707 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209770 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209925 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.209951 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210053 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210053 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210080 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210203 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210293 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210368 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.209101 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210444 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210471 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210556 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210660 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.210758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.210936 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211057 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211258 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211338 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211364 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211441 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.211906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.211957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212126 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212128 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212262 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212450 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.212479 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.212991 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.213171 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213257 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.213811 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214022 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214537 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.214605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:13 crc kubenswrapper[4183]: E0813 19:57:13.215003 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.434236 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:13 crc kubenswrapper[4183]: I0813 19:57:13.434379 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.209629 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.209731 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209354 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210077 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209356 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.209378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210923 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:14 crc kubenswrapper[4183]: E0813 19:57:14.210934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.433529 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:14 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:14 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:14 crc kubenswrapper[4183]: I0813 19:57:14.433636 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208923 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.208876 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209022 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209035 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209151 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209182 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209298 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209341 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209347 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209656 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209675 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209704 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209892 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.209953 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.209896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210005 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210019 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210133 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210186 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210220 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210240 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210134 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210499 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210569 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210616 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210737 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.210897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.210925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211125 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211211 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211233 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211284 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211386 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.211724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.211886 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212092 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212267 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212750 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.212949 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.212503 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213401 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213639 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.213897 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.213959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.432269 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:15 crc kubenswrapper[4183]: I0813 19:57:15.432388 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:15 crc kubenswrapper[4183]: E0813 19:57:15.526372 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.208474 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.208972 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.209251 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209427 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209489 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.209508 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.209884 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.210581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.211194 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.211386 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:16 crc kubenswrapper[4183]: E0813 19:57:16.211696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.434755 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.434974 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718121 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718647 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.718858 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.719002 4183 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Aug 13 19:57:16 crc kubenswrapper[4183]: I0813 19:57:16.719105 4183 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T19:57:16Z","lastTransitionTime":"2025-08-13T19:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.099408 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.125052 4183 reflector.go:351] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208424 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.208462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208475 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208464 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208537 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.208649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208715 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208852 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208952 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.208957 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209024 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209070 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209244 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209291 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209318 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209351 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209393 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209447 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209521 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209655 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209851 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209893 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209944 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.209948 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.209983 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.210036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210086 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210117 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210216 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210353 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.210466 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210606 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.210678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211123 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211172 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211173 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211245 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211307 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211431 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211478 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211599 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.211660 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211732 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211863 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.211952 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.212058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212177 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212241 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212443 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212515 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.212819 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.213281 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.213437 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.213904 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:17 crc kubenswrapper[4183]: E0813 19:57:17.214153 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.432718 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:17 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:17 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:17 crc kubenswrapper[4183]: I0813 19:57:17.432908 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.209282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.209663 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.210034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.210406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.210613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.211024 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:18 crc kubenswrapper[4183]: E0813 19:57:18.211224 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.321547 4183 csr.go:261] certificate signing request csr-6mdrh is approved, waiting to be issued
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.338156 4183 csr.go:257] certificate signing request csr-6mdrh is issued
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.432251 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:18 crc kubenswrapper[4183]: I0813 19:57:18.432335 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.209688 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210189 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210282 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210324 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210456 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.209693 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210623 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.210670 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.211661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.211916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.211860 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212222 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212259 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212322 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212328 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212401 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212421 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212485 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212506 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212535 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212621 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212695 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212753 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212877 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212881 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.212984 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213030 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213078 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213134 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213219 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.212936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213290 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213337 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213387 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213418 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213508 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213573 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213585 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213620 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213671 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.213913 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.213994 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214071 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214264 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214481 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214600 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.214628 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.214761 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215032 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215116 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215253 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215605 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215716 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.215992 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216160 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216325 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:19 crc kubenswrapper[4183]: E0813 19:57:19.216453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.340423 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-29 11:41:58.636711427 +0000 UTC Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.340502 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6207h44m39.296215398s for next certificate rotation Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.432000 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:19 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:19 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:19 crc kubenswrapper[4183]: I0813 19:57:19.432079 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.208455 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.208853 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.209252 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209438 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.209501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.209721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.211693 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.212273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ovnkube-controller pod=ovnkube-node-44qcg_openshift-ovn-kubernetes(3e19f9e8-9a37-4ca8-9790-c219750ab482)\"" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.212492 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.221584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.221890 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.222101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.222298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.341232 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-29 00:37:29.51445257 +0000 UTC Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.341283 4183 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6196h40m9.173174313s for next certificate rotation Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.435956 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:20 crc kubenswrapper[4183]: I0813 19:57:20.436048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:20 crc kubenswrapper[4183]: E0813 19:57:20.528200 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208581 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208582 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208294 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208332 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208339 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208355 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208379 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208380 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208396 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208392 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208412 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208413 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208423 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208432 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208445 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208468 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208521 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208529 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208528 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208536 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208568 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.208209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210300 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.210540 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.210950 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.211163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212140 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212734 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.212557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.213307 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213378 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213502 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213737 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.213936 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214103 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214207 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214363 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214464 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214586 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.214619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214756 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.214942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215200 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215402 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215889 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.215931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216273 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216432 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216573 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.216858 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.432040 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:21 crc kubenswrapper[4183]: I0813 19:57:21.432151 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:21 crc kubenswrapper[4183]: E0813 19:57:21.611965 4183 controller.go:195] "Failed to update 
lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": StorageError: invalid object, Code: 4, Key: /kubernetes.io/leases/kube-node-lease/crc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 705b8cea-b0fa-4d4c-9420-d8b3e9b05fb1, UID in object meta: " Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209404 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209562 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209563 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.209639 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.209904 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.210017 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210109 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210181 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.210262 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:22 crc kubenswrapper[4183]: E0813 19:57:22.210425 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.433563 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:22 crc kubenswrapper[4183]: I0813 19:57:22.433664 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208546 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.208753 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.209517 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209733 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.209927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210176 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210243 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210348 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.210746 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210931 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211047 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211148 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211198 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211264 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211434 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211499 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211522 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211605 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211649 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.211820 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.211963 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212049 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212078 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.210399 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212195 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212225 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212336 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212406 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212421 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212533 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212552 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212588 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.212689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.212954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213009 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213009 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213272 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213085 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213453 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213533 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.213631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.213980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214139 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214186 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214288 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214393 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214474 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.214586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.214881 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215119 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215176 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.215309 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215389 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:23 crc kubenswrapper[4183]: E0813 19:57:23.215601 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.431727 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:23 crc kubenswrapper[4183]: I0813 19:57:23.431938 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208706 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208819 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208707 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.208739 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209214 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.209235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209357 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:24 crc kubenswrapper[4183]: E0813 19:57:24.209596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.431766 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:24 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:24 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:24 crc kubenswrapper[4183]: I0813 19:57:24.431938 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208689 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208720 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.208745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211114 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211160 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211264 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211329 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.211650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211703 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.211768 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212166 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212222 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212286 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212376 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212444 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212534 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.212663 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212856 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.212935 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213166 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213287 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213600 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213648 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213701 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.213745 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.213890 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214036 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214154 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.214692 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214743 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214874 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214903 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215366 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214927 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214976 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215000 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215640 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215003 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215034 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215762 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.214965 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.215098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.215275 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216122 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.216329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.216405 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216496 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216582 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216754 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.216919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217020 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.217454 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.217497 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217596 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217471 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.217646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218089 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218266 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.218938 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.219277 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.433163 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:25 crc kubenswrapper[4183]: I0813 19:57:25.433272 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:25 crc kubenswrapper[4183]: E0813 19:57:25.530038 4183 kubelet.go:2906] "Container runtime network 
not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208571 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208607 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208628 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209906 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208677 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.208765 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.209250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:26 crc kubenswrapper[4183]: E0813 19:57:26.210370 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.434189 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:26 crc kubenswrapper[4183]: I0813 19:57:26.434368 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208927 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208990 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209036 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209098 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209214 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209248 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208937 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.208959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209361 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209374 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209357 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209400 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209467 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209481 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209550 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209550 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.209760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209861 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209769 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.209919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210052 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210181 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210234 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210312 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210316 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210345 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210406 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210415 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210443 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210545 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210591 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210642 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210822 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210847 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210872 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210921 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.210946 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.210984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211033 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211063 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211179 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211203 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211221 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211360 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211475 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.211502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211572 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211728 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.211968 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212529 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212604 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212614 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212670 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.212986 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213192 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213227 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213343 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213612 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213696 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:27 crc kubenswrapper[4183]: E0813 19:57:27.213925 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.433656 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:27 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:27 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:27 crc kubenswrapper[4183]: I0813 19:57:27.433849 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212108 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212235 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212254 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212326 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212191 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212664 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.212955 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.212919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:28 crc kubenswrapper[4183]: E0813 19:57:28.213130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.432307 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:28 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:28 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:28 crc kubenswrapper[4183]: I0813 19:57:28.432407 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209210 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.209442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209740 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.209901 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.209959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210088 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210099 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210168 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210213 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210309 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210361 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210455 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210594 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.210690 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.210934 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211069 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211127 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211165 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211283 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211293 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211285 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211334 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211397 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211493 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.211678 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211721 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211902 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.211945 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212090 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212289 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212430 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212586 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212647 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.212870 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212905 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.212951 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213017 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213081 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213167 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213331 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213336 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213364 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213365 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213538 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213574 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213681 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.213819 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213930 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.213996 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214037 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.214137 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214223 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214648 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214868 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.214915 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215002 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215146 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215217 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:29 crc kubenswrapper[4183]: E0813 19:57:29.215358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.433632 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:29 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:29 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:29 crc kubenswrapper[4183]: I0813 19:57:29.433961 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209590 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.209520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210030 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.210118 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210276 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.210362 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210569 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.210660 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.211015 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.211226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.431976 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:30 crc kubenswrapper[4183]: I0813 19:57:30.432089 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:30 crc kubenswrapper[4183]: E0813 19:57:30.531284 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.209521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.209619 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210033 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210131 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210328 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210442 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210676 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210759 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.210864 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210931 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.211190 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210624 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210640 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.210659 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211143 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.211993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212058 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212196 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212284 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212412 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212480 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212544 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212698 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212898 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.212993 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212099 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213191 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213384 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213527 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.213887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212136 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.212163 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.213975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214041 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214092 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214203 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214326 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214387 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214472 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214557 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214656 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.214724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.214999 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215067 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215154 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215204 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215299 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215462 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.215675 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215736 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215895 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.215974 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.216011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216238 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.216285 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.216417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.217064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:31 crc kubenswrapper[4183]: E0813 19:57:31.217258 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.432319 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:31 crc kubenswrapper[4183]: I0813 19:57:31.432470 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208899 4183 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209054 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209128 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208898 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209297 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.208939 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209394 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.209409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209713 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209883 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:32 crc kubenswrapper[4183]: E0813 19:57:32.209965 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.431318 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:32 crc kubenswrapper[4183]: I0813 19:57:32.431441 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.209496 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.209897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210141 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210211 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210256 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210295 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210413 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210488 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210635 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210699 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210862 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.210865 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.210998 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211079 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211138 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211192 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211288 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211333 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211339 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211411 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211498 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211584 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211653 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.211752 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212039 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212081 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212055 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212068 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212183 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212212 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212219 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212245 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212295 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212304 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212323 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212353 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212373 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212398 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212400 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212417 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212435 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212473 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212546 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212567 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212600 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212630 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.212758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212880 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212230 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212909 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212927 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.212991 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213208 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213323 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213581 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.213978 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214102 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214560 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214710 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.214896 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215068 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215284 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215371 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215426 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215673 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215758 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215897 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:33 crc kubenswrapper[4183]: E0813 19:57:33.215984 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.217270 4183 scope.go:117] "RemoveContainer" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.433584 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:33 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:33 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.433982 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.688295 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovnkube-controller/5.log"
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.692328 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9"}
Aug 13 19:57:33 crc kubenswrapper[4183]: I0813 19:57:33.692941 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.209412 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.209677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209744 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209687 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210096 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210151 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.209724 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.210268 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.210561 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.211064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.211375 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:34 crc kubenswrapper[4183]: E0813 19:57:34.211912 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.433020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:34 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:34 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:34 crc kubenswrapper[4183]: I0813 19:57:34.433150 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208635 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209057 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209157 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209182 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209087 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208885 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208896 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208916 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208882 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208987 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.209012 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.208684 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.212346 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212469 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212470 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212501 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212502 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212520 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212525 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212539 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.212592 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.213272 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.213431 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.213487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214025 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214029 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214034 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214065 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214095 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214187 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214216 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214221 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214199 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214246 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214255 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214261 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214302 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214314 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.214319 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214429 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.214983 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215511 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215682 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215695 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.215855 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216083 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216201 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216298 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216428 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216501 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216544 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.216900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217494 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217954 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218031 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218070 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218142 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218210 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218226 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218279 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218417 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218567 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.218714 4183 scope.go:117] "RemoveContainer" containerID="2c46ff68a04a1082f93e69c285c61b083600d8bade481e7378a0c769ad40ab0f" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.218730 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219001 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219172 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219395 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219424 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219507 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.217738 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.219631 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.433259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:35 crc kubenswrapper[4183]: I0813 19:57:35.433551 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:35 crc kubenswrapper[4183]: E0813 19:57:35.533656 4183 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.208755 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209055 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209071 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209147 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209358 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.209661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.209859 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210086 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.210274 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.210646 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.210903 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.211114 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:36 crc kubenswrapper[4183]: E0813 19:57:36.211356 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.437962 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.438098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.721268 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/4.log" Aug 13 19:57:36 crc kubenswrapper[4183]: I0813 19:57:36.721371 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"f7be0e9008401c6756f1bf4076bb89596e4b26b5733f27692dcb45eff8e4fa5e"} Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212437 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.212748 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212761 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.212959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213048 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213054 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213165 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213166 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213255 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213280 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213310 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213506 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213511 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213543 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213507 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213608 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213680 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213646 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.213711 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213758 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.213767 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214004 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214013 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214110 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214119 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214422 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214547 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214645 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.214708 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.214900 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215007 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215178 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215244 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215384 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215557 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.215638 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215681 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.215729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216130 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216197 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216259 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216334 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216456 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216509 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216526 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216597 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216601 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216704 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.216902 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.216985 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217209 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217271 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217355 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217706 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.217891 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.217986 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218064 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218156 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.218193 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218329 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218522 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.218919 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.223959 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.224409 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.224651 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.225589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:37 crc kubenswrapper[4183]: E0813 19:57:37.225977 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.434638 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:37 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:37 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:37 crc kubenswrapper[4183]: I0813 19:57:37.435052 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208708 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208762 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.210319 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208980 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209062 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209114 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211589 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.209142 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211693 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.210509 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.208919 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:57:38 crc kubenswrapper[4183]: E0813 19:57:38.211931 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e"
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.433769 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:38 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:38 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:38 crc kubenswrapper[4183]: I0813 19:57:38.434416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.209046 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.209324 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.209580 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.209697 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210453 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210618 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.210726 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.210999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211145 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.211329 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211447 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.211633 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.211942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212123 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212402 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212527 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212577 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212741 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.212747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212942 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.212982 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213074 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213090 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213215 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213349 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213488 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213566 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.213987 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.213994 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214027 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214106 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214113 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214159 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214195 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214205 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214241 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214250 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214350 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214301 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214577 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214716 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214887 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214910 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.214815 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.214959 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215100 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861"
Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215237 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215253 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215284 4183 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215379 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215487 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215542 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215713 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.215851 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215861 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.215995 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216021 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216193 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216282 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216350 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216408 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.216439 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216491 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216554 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216613 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216701 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.216755 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.217158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Aug 13 19:57:39 crc kubenswrapper[4183]: E0813 19:57:39.217197 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.436366 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:39 crc kubenswrapper[4183]: I0813 19:57:39.437058 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.211265 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212059 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212149 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.212269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.212598 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.212961 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213051 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.213066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.213156 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213247 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.213714 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Aug 13 19:57:40 crc kubenswrapper[4183]: E0813 19:57:40.214076 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.266574 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.432590 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:40 crc kubenswrapper[4183]: I0813 19:57:40.432865 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.209050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.209745 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210352 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210482 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210552 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210935 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211161 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211388 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.210370 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211084 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211463 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211490 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211690 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211747 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211883 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211912 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215150 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215865 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.211011 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215947 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.221505 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.222895 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.223083 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.216010 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.228028 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.216046 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.229269 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.229562 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.215977 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.241601 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242056 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242067 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242113 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242336 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242411 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242501 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Aug 13 
19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242535 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242550 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.241610 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242696 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242706 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242721 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.242902 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243032 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243183 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243256 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243304 
4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243353 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243418 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243625 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243639 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243650 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243691 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243759 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243762 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.244175 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.244274 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.246346 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.247889 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.248196 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.250203 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.243256 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.252738 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.256433 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.257146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258172 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258243 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258606 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258764 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.258966 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259106 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259175 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259199 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259241 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 19:57:41 crc 
kubenswrapper[4183]: I0813 19:57:41.259245 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259285 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259362 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259422 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259435 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259478 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259494 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259526 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.259548 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.261272 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.261681 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" 
Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.264082 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.269514 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.307591 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.308505 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.309621 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.309967 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310290 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310582 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310883 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311166 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311376 4183 reflector.go:351] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311464 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311691 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311910 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312199 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312374 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.312658 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313111 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313469 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.311752 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.313112 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 19:57:41 crc 
kubenswrapper[4183]: I0813 19:57:41.314268 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314444 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314669 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.315003 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314447 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.315365 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.310983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314133 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314064 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314550 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314611 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.316354 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.314289 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.317420 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.317867 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318034 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318037 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318165 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318298 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318346 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.318896 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 19:57:41 crc 
kubenswrapper[4183]: I0813 19:57:41.320540 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.320732 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.321535 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322249 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322443 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.322640 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.323503 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.323947 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.320545 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.335763 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.373275 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.377125 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.377867 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.378103 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.380902 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.382316 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.380925 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.392298 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.761730 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.771421 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:41 crc 
kubenswrapper[4183]: healthz check failed Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.772021 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.773384 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.773751 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.775921 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.778358 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782116 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782176 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782323 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782358 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782478 4183 reflector.go:351] Caches populated for *v1.Secret 
from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782508 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782516 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782613 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782644 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782866 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.782919 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.783210 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.783263 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.787909 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:57:41 crc kubenswrapper[4183]: I0813 19:57:41.798160 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208297 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208409 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208414 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208459 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208476 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.208506 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.212364 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.213613 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.219195 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220254 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220488 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220649 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.220732 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221293 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221356 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.221537 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222323 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222449 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222589 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.222762 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.224049 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.224403 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225661 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.225962 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.226365 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.233581 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.253567 4183 reflector.go:351] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.275679 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.304066 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.314169 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.434430 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:42 crc kubenswrapper[4183]: I0813 19:57:42.434547 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:43 crc kubenswrapper[4183]: I0813 19:57:43.432432 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:43 crc kubenswrapper[4183]: I0813 19:57:43.432531 4183 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:44 crc kubenswrapper[4183]: I0813 19:57:44.432188 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:44 crc kubenswrapper[4183]: I0813 19:57:44.432304 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:45 crc kubenswrapper[4183]: I0813 19:57:45.432995 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:45 crc kubenswrapper[4183]: I0813 19:57:45.433130 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:46 crc kubenswrapper[4183]: I0813 19:57:46.433813 4183 patch_prober.go:28] 
interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:46 crc kubenswrapper[4183]: I0813 19:57:46.433992 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.353241 4183 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeReady" Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.433148 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:47 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:47 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:47 crc kubenswrapper[4183]: I0813 19:57:47.433633 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.197613 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.197747 4183 topology_manager.go:215] 
"Topology Admit Handler" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" podNamespace="openshift-marketplace" podName="community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.199300 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.259669 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.260237 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.260552 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363416 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 
crc kubenswrapper[4183]: I0813 19:57:48.363500 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.363691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.364212 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.364231 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.424550 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.424707 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" podNamespace="openshift-marketplace" podName="redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 
19:57:48.425866 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.428554 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.428689 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" podNamespace="openshift-marketplace" podName="certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.429911 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.432870 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.433017 4183 topology_manager.go:215] "Topology Admit Handler" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" podNamespace="openshift-image-registry" podName="image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.433729 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.436674 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.437013 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.437705 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.436687 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.441216 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.444276 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.451169 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:48 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:48 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.451289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.493579 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.720542 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.723559 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.737102 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.756056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"community-operators-k9qqb\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") " pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.816858 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.981515 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982108 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982213 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982516 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982633 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.982895 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.983994 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:48 crc 
kubenswrapper[4183]: I0813 19:57:48.984246 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984410 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984449 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984701 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.984987 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: 
\"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.985149 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.985556 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.986030 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:48 crc kubenswrapper[4183]: I0813 19:57:48.986310 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.087352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.087993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088206 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088407 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088469 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088913 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.088951 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089332 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089423 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.089987 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090317 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090496 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090872 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.090979 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 
19:57:49.091057 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.091318 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.092134 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.092477 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.095720 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc 
kubenswrapper[4183]: I0813 19:57:49.097461 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.104405 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.336484 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"] Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.342516 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.362020 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"collect-profiles-29251905-zmjv9\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.368744 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9ss\" (UniqueName: 
\"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.378023 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.382516 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"redhat-operators-dcqzh\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") " pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.388390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"certified-operators-g4v97\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") " pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.434101 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:49 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:49 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.434603 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.646975 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 19:57:49 crc kubenswrapper[4183]: I0813 19:57:49.656723 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.103073 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"] Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.163072 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"] Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.438628 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:50 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:57:50 crc kubenswrapper[4183]: healthz check failed Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.439249 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.806934 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350"} 
Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.808905 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerStarted","Data":"a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c"} Aug 13 19:57:50 crc kubenswrapper[4183]: I0813 19:57:50.808974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerStarted","Data":"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998"} Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.138891 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.159169 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 19:57:51 crc kubenswrapper[4183]: W0813 19:57:51.164371 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb917686_edfb_4158_86ad_6fce0abec64c.slice/crio-2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761 WatchSource:0}: Error finding container 2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761: Status 404 returned error can't find the container with id 2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761 Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.433543 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:57:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:57:51 crc 
kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.433646 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828714 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a" exitCode=0
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828863 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.828914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831070 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a" exitCode=0
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831131 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.831166 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.834334 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101" exitCode=0
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.834419 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"}
Aug 13 19:57:51 crc kubenswrapper[4183]: I0813 19:57:51.837609 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040207 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040326 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040670 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.040988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:57:52 crc kubenswrapper[4183]: I0813 19:57:52.432494 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:52 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:52 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:52 crc kubenswrapper[4183]: I0813 19:57:52.432613 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.846579 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.947723 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948212 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948646 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.948878 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.953627 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.953856 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.954051 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:57:52 crc kubenswrapper[4183]: E0813 19:57:52.954225 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.095396 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" podStartSLOduration=475.095328315 podStartE2EDuration="7m55.095328315s" podCreationTimestamp="2025-08-13 19:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:57:52.022866401 +0000 UTC m=+838.715531419" watchObservedRunningTime="2025-08-13 19:57:53.095328315 +0000 UTC m=+839.787992933"
Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.432381 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:53 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:53 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:53 crc kubenswrapper[4183]: I0813 19:57:53.432503 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.433767 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:54 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:54 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.433956 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678312 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678447 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678541 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678575 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 19:57:54 crc kubenswrapper[4183]: I0813 19:57:54.678636 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.435181 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:55 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:55 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.436485 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.859189 4183 generic.go:334] "Generic (PLEG): container finished" podID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerID="a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c" exitCode=0
Aug 13 19:57:55 crc kubenswrapper[4183]: I0813 19:57:55.859276 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerDied","Data":"a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c"}
Aug 13 19:57:56 crc kubenswrapper[4183]: I0813 19:57:56.432581 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:56 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:56 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:56 crc kubenswrapper[4183]: I0813 19:57:56.433008 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.076399 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214729 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") "
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214952 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") "
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.214984 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") pod \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\" (UID: \"8500d7bd-50fb-4ca6-af41-b7a24cae43cd\") "
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.216641 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.223045 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-config-volume\") on node \"crc\" DevicePath \"\""
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.232093 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.240859 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl" (OuterVolumeSpecName: "kube-api-access-5nrgl") pod "8500d7bd-50fb-4ca6-af41-b7a24cae43cd" (UID: "8500d7bd-50fb-4ca6-af41-b7a24cae43cd"). InnerVolumeSpecName "kube-api-access-5nrgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.330182 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-secret-volume\") on node \"crc\" DevicePath \"\""
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.330247 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5nrgl\" (UniqueName: \"kubernetes.io/projected/8500d7bd-50fb-4ca6-af41-b7a24cae43cd-kube-api-access-5nrgl\") on node \"crc\" DevicePath \"\""
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.433681 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:57 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:57 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.433851 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9" event={"ID":"8500d7bd-50fb-4ca6-af41-b7a24cae43cd","Type":"ContainerDied","Data":"8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998"}
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868624 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb40cf57cd40846ea6dd7cdfaa7418bcec66df8537c43111850207e05e4b998"
Aug 13 19:57:57 crc kubenswrapper[4183]: I0813 19:57:57.868702 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"
Aug 13 19:57:58 crc kubenswrapper[4183]: I0813 19:57:58.432042 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:58 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:58 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:58 crc kubenswrapper[4183]: I0813 19:57:58.432152 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:57:59 crc kubenswrapper[4183]: I0813 19:57:59.433562 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:57:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:57:59 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:57:59 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:57:59 crc kubenswrapper[4183]: I0813 19:57:59.433719 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:00 crc kubenswrapper[4183]: I0813 19:58:00.431964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:00 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:00 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:00 crc kubenswrapper[4183]: I0813 19:58:00.432051 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:01 crc kubenswrapper[4183]: I0813 19:58:01.434217 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:01 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:01 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:01 crc kubenswrapper[4183]: I0813 19:58:01.434297 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:02 crc kubenswrapper[4183]: I0813 19:58:02.436078 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:02 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:02 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:02 crc kubenswrapper[4183]: I0813 19:58:02.436184 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:03 crc kubenswrapper[4183]: I0813 19:58:03.434049 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:03 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:03 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:03 crc kubenswrapper[4183]: I0813 19:58:03.434158 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:04 crc kubenswrapper[4183]: I0813 19:58:04.431247 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:04 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:04 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:04 crc kubenswrapper[4183]: I0813 19:58:04.433048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:05 crc kubenswrapper[4183]: I0813 19:58:05.433205 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:05 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:05 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:05 crc kubenswrapper[4183]: I0813 19:58:05.433339 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.337633 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.337723 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.338150 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:58:06 crc kubenswrapper[4183]: E0813 19:58:06.338265 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:58:06 crc kubenswrapper[4183]: I0813 19:58:06.435695 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:06 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:06 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:06 crc kubenswrapper[4183]: I0813 19:58:06.436073 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:07 crc kubenswrapper[4183]: I0813 19:58:07.434455 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:58:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:58:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:58:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:58:07 crc kubenswrapper[4183]: I0813 19:58:07.434626 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.318713 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320372 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials.
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320732 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.321019 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.320478 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324305 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324482 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:08 crc kubenswrapper[4183]: E0813 19:58:08.324587 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:08 crc kubenswrapper[4183]: I0813 19:58:08.434303 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:08 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:08 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:08 crc kubenswrapper[4183]: I0813 19:58:08.434446 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:09 crc kubenswrapper[4183]: I0813 19:58:09.438110 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:09 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:09 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:09 crc 
kubenswrapper[4183]: I0813 19:58:09.438240 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:10 crc kubenswrapper[4183]: I0813 19:58:10.432062 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:10 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:10 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:10 crc kubenswrapper[4183]: I0813 19:58:10.432208 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:11 crc kubenswrapper[4183]: I0813 19:58:11.433134 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:11 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:11 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:11 crc kubenswrapper[4183]: I0813 19:58:11.433293 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:12 crc kubenswrapper[4183]: I0813 
19:58:12.433039 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:12 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:12 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:12 crc kubenswrapper[4183]: I0813 19:58:12.433197 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:13 crc kubenswrapper[4183]: I0813 19:58:13.432221 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:13 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:13 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:13 crc kubenswrapper[4183]: I0813 19:58:13.432940 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:14 crc kubenswrapper[4183]: I0813 19:58:14.432003 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:14 crc kubenswrapper[4183]: [-]has-synced failed: reason 
withheld Aug 13 19:58:14 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:14 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:14 crc kubenswrapper[4183]: I0813 19:58:14.432115 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:15 crc kubenswrapper[4183]: I0813 19:58:15.434366 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:15 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:15 crc kubenswrapper[4183]: I0813 19:58:15.434536 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.433911 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:58:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:58:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:58:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.434117 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.434269 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.435901 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Aug 13 19:58:16 crc kubenswrapper[4183]: I0813 19:58:16.435988 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac" gracePeriod=3600 Aug 13 19:58:21 crc kubenswrapper[4183]: E0813 19:58:21.211747 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:58:22 crc kubenswrapper[4183]: E0813 19:58:22.211080 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:23 crc kubenswrapper[4183]: E0813 19:58:23.210866 4183 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.354289 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.354912 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.355202 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:32 crc kubenswrapper[4183]: E0813 19:58:32.355269 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313227 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313316 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313602 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.313672 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.314935 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.314991 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.315100 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:58:34 crc kubenswrapper[4183]: E0813 19:58:34.315148 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:46 crc kubenswrapper[4183]: E0813 19:58:46.213435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:58:46 crc kubenswrapper[4183]: E0813 19:58:46.214118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:58:47 crc kubenswrapper[4183]: E0813 19:58:47.211121 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080127 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080216 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080259 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080316 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080425 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080465 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080567 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080612 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: 
\"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080824 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.080995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081186 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081251 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081397 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" 
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.081433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082076 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082112 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082150 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: 
\"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.082187 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.097046 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.098249 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.098579 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100112 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100300 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100595 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100720 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100903 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100963 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.100738 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101123 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101188 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101134 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101366 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101482 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101562 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101485 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.101434 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102433 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102486 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.102574 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.104960 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.106550 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.106853 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.109448 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.115525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.118523 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.120930 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.121983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125282 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125352 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.125507 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.126536 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.129603 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " 
pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.132968 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133390 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133558 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133718 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.133918 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.134768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.135522 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.136703 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.137371 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zrh\" (UniqueName: 
\"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.140741 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.141097 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.142731 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.184422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.184966 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.185619 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.185953 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186467 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " 
pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.186944 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.187109 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.187445 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.193122 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.199636 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc 
kubenswrapper[4183]: I0813 19:58:54.201391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.202267 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.204150 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.204993 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.205435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.206269 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.210386 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.214730 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod 
\"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.216506 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.218405 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.220324 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.221533 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.224521 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.238013 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.238136 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.248146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290025 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod 
\"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290237 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290272 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290456 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290515 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290827 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290898 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290934 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 
13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.290974 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291006 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291053 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291088 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291121 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod 
\"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291160 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291291 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291330 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291392 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: 
\"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291427 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291457 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291516 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291560 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291614 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291670 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291935 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " 
pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.291992 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292034 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292108 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292154 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" 
(UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292183 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292213 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292247 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292276 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292302 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292352 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292386 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292422 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292480 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292532 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292594 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292618 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292654 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292681 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc 
kubenswrapper[4183]: I0813 19:58:54.292866 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292912 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292939 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.292982 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293008 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293031 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293058 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293094 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293162 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 
19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293232 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293271 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.293360 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.294860 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.299637 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd8h\" 
(UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.301252 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302211 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302283 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.302424 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.307601 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.308881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.309144 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.309362 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.310231 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.313753 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.314121 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.314221 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315010 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315064 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315190 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 19:58:54 crc 
kubenswrapper[4183]: I0813 19:58:54.315508 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.315681 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.317130 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.318242 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.320458 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321129 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321459 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321901 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.325876 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.325991 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.326425 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.327555 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.328657 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331503 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.331983 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.332229 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.332639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.338987 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.339016 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.335887 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.339947 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.341726 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336054 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336159 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336223 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336368 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336530 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336626 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336744 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.336972 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337111 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337373 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337436 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337619 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337677 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.337693 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.348095 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.348521 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.355957 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.362342 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.363214 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.363612 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.400108 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.400259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.402007 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.367088 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.368167 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.368302 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.369592 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.370043 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.372150 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.372521 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.373311 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.409707 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321217 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.390198 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.384106 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395368 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395563 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.321494 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395669 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.393617 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.396453 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.396729 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.416231 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397048 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397222 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397500 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.397694 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.398396 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.395664 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417493 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417534 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.417702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.418067 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.420976 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.422725 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.431009 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.439899 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.440300 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.440377 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.441131 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.442587 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.421919 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443403 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443710 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.443991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.444208 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.444393 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.447106 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: E0813 19:58:54.448506 4183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-08-13 20:00:56.448363925 +0000 UTC m=+1023.141028744 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-585546dd8b-v5m4t" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.450060 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.450766 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.451757 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.451936 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.452496 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454480 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454555 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454642 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454686 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454744 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454853 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454889 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454916 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454969 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.454993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455021 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455083 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455114 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455146 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455178 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455232 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455264 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455315 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455339 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455384 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455411 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455444 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455487 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455533 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455570 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:58:54 crc
kubenswrapper[4183]: I0813 19:58:54.455597 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455624 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455657 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455682 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " 
pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455871 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455899 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.455975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457042 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod 
\"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457406 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457463 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.457575 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.464222 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.465881 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4t2\" (UniqueName: 
\"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.466387 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.471110 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.471856 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.472186 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.472991 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdrb\" (UniqueName: 
\"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.475297 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476227 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476432 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.476713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.488593 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.489082 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.493037 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf29r\" (UniqueName: 
\"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.493886 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"image-registry-585546dd8b-v5m4t\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.495258 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.497182 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.497293 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.503497 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.510602 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512317 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2zg\" (UniqueName: 
\"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.512928 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513074 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513259 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513276 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513425 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513479 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"redhat-marketplace-rmwfn\" (UID: 
\"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513585 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.513994 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514134 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514230 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514270 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514464 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514484 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.514690 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 19:58:54 crc 
kubenswrapper[4183]: I0813 19:58:54.514954 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.515130 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.516452 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.521692 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522016 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522394 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.522642 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.523288 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.523771 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.524764 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: 
I0813 19:58:54.525530 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.526728 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.527908 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"route-controller-manager-5c4dbb8899-tchz5\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.529986 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.530150 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.530339 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.531438 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.532171 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.532502 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.533421 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"service-ca-666f99b6f-vlbxv\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") " pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535007 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535185 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.535903 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.537752 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.538292 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.539487 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.539883 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540175 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540439 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.540740 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.542768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5kg\" (UniqueName: 
\"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.542907 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.545213 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.557110 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.558604 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.564140 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.564514 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.568286 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.572614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"apiserver-67cbf64bc9-mtx25\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.579070 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.588214 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.588667 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.597455 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.602158 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.607672 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.608537 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.621518 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.623956 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.635440 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.647748 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.652661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.668527 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.670606 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.670688 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.672019 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681257 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681384 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681426 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681481 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.681503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.686996 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.687358 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.687616 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.698272 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.702768 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.706755 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.713401 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.717365 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.724723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.724718 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.725372 4183 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.744518 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.745493 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.760719 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.763596 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.764477 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.775288 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.778455 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.794056 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"oauth-openshift-765b47f944-n2lhl\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.795378 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.797673 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.799550 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.804231 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.804981 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.826227 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.828321 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.838267 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.839614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.839765 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.854303 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.863165 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"controller-manager-6ff78978b4-q4vv8\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") " pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.869181 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.870553 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.881145 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.886198 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.890507 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"console-84fccc7b6-mkncc\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.892445 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.904768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.908429 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.917146 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.935259 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.935682 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936047 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936096 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936354 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.936461 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.948120 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 19:58:54 crc kubenswrapper[4183]: I0813 19:58:54.972746 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017116 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017340 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.017900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.203144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Aug 13 19:58:55 crc kubenswrapper[4183]: I0813 19:58:55.203212 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 19:58:56 crc kubenswrapper[4183]: I0813 19:58:56.183104 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a3a061a59b867b60a3e6a1a13d08ce968a7bfbe260f6cd0b17972429364f2dff"} Aug 13 19:58:56 crc kubenswrapper[4183]: I0813 19:58:56.198351 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"cb33d2fb758e44ea5d6c5308cf6a0c2e4f669470cf12ebbac204a7dbd9719cdb"} Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.443884 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd556935_a077_45df_ba3f_d42c39326ccd.slice/crio-3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219 WatchSource:0}: Error finding container 3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219: Status 404 returned error can't find the container with id 3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219 
Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.457129 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda702c6d2_4dde_4077_ab8c_0f8df804bf7a.slice/crio-2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5 WatchSource:0}: Error finding container 2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5: Status 404 returned error can't find the container with id 2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5 Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.870876 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63eb7413_02c3_4d6e_bb48_e5ffe5ce15be.slice/crio-51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724 WatchSource:0}: Error finding container 51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724: Status 404 returned error can't find the container with id 51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724 Aug 13 19:58:56 crc kubenswrapper[4183]: W0813 19:58:56.887154 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4dca86_e6ee_4ec9_8324_86aff960225e.slice/crio-042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933 WatchSource:0}: Error finding container 042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933: Status 404 returned error can't find the container with id 042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933 Aug 13 19:58:57 crc kubenswrapper[4183]: W0813 19:58:57.173735 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4092a9f8_5acc_4932_9e90_ef962eeb301a.slice/crio-40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748 WatchSource:0}: Error finding container 
40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748: Status 404 returned error can't find the container with id 40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748 Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.210363 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"caf64d49987c99e4ea9efe593e0798b0aa755d8fdf7441c0156e1863763a7aa0"} Aug 13 19:58:57 crc kubenswrapper[4183]: W0813 19:58:57.222952 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1a8966_f594_490a_9fbb_eec5bafd13d3.slice/crio-44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2 WatchSource:0}: Error finding container 44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2: Status 404 returned error can't find the container with id 44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2 Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268604 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"51987a02e71ec4003b940a6bd7b8959747a906e94602c62bbc671c8b26623724"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"2e8f0bacebafcab5bbf3b42b7e4297638b1e6acfcc74bfc10076897a7be4d368"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268703 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" 
event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"2680ced3658686e640e351a3342c799f7707f03bca3c8f776b22a7e838d68fd5"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.268728 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"1f2d8ae3277a5b2f175e31e08d91633d08f596d9399c619715c2f8b9fe7a9cf2"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.337665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.719372 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5"} Aug 13 19:58:57 crc kubenswrapper[4183]: I0813 19:58:57.741147 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"9ed66fef0dec7ca57bc8a1a3ccbadd74658c15ad523b6b56b58becdb98c703e8"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.206658 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"3a1adfc54f586eb717d23524f11a70a1c368ae7c720306a0e33e3393d7584219"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.236049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"40aef0eb1bbaaf5556252dcc2b75e214706ba3a0320e40aaa6997926ec4cf748"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.358206 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"861ac63b0e0c6ab1fc9beb841998e0e5dd2860ed632f8f364e94f575b406c884"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.361406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"76a23bcc5261ffef3e87aed770d502891d5cf930ce8f5608091c10c4c2f8355e"} Aug 13 19:58:58 crc kubenswrapper[4183]: I0813 19:58:58.432246 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"042b00f269188506965ca0b8217a4771ff1a78f7f3244b92c9aa64e154290933"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.500593 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerStarted","Data":"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.506781 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerStarted","Data":"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.512361 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"88c60b5e25b2ce016efe1942b67b182d4d9c87cf3eb10c9dc1468dc3abce4e98"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.527693 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"44f5ef3518ac6b9316c8964c76fdb446b6ab5fa88b9a56316e56f0b8cd21e4d2"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.539266 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239"} Aug 13 19:58:59 crc kubenswrapper[4183]: I0813 19:58:59.545732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"20a42c53825c9180dbf4c0a948617094d91e080fc40247547ca99c537257a821"} Aug 13 19:58:59 crc kubenswrapper[4183]: E0813 19:58:59.842138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:58:59 crc kubenswrapper[4183]: E0813 19:58:59.842286 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" 
podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.718280 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"2c45b735c45341a1d77370cd8823760353056c6e1eff59259f19fde659c543fb"} Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.740672 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"ffa2ba8d5c39d98cd54f79874d44a75e8535b740b4e7b22d06c01c67e926ca36"} Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.755194 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b5c38ff_1fa8_4219_994d_15776acd4a4d.slice/crio-2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892 WatchSource:0}: Error finding container 2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892: Status 404 returned error can't find the container with id 2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892 Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.761219 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13ad7555_5f28_4555_a563_892713a8433a.slice/crio-8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141 WatchSource:0}: Error finding container 8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141: Status 404 returned error can't find the container with id 8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141 Aug 13 19:59:00 crc kubenswrapper[4183]: I0813 19:59:00.877647 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436"} Aug 13 19:59:00 crc kubenswrapper[4183]: W0813 19:59:00.927578 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13045510_8717_4a71_ade4_be95a76440a7.slice/crio-63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc WatchSource:0}: Error finding container 63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc: Status 404 returned error can't find the container with id 63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc Aug 13 19:59:01 crc kubenswrapper[4183]: W0813 19:59:01.027943 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d67253e_2acd_4bc1_8185_793587da4f17.slice/crio-282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722 WatchSource:0}: Error finding container 282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722: Status 404 returned error can't find the container with id 282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722 Aug 13 19:59:01 crc kubenswrapper[4183]: E0813 19:59:01.219981 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.429123 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" 
event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"63f14f64c728127421ed63e84871dff5b193c951f7847a6c42411c5c4d4deedc"} Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.542201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"2cacd5e0efb1ce8b67d9c8c51dfbe105553c3a82ee16c3fc685a1e74f7194892"} Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.635732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722"} Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.804379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerStarted","Data":"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141"} Aug 13 19:59:02 crc kubenswrapper[4183]: I0813 19:59:02.933327 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d"} Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.191186 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a7b73c0ecb48e250899c582dd00bb24b7714077ab1f62727343c931aaa84b579"} Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.265525 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"3137e2c39453dcdeff67eb557e1f28db273455a3b55a18b79757d9f183fde4e9"} Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.268364 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.284147 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.284445 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.299428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"2a3de049472dc73b116b7c97ddeb21440fd8f50594e5e9dd726a1c1cfe0bf588"} Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.300463 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.302653 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.302736 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.307569 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"96c6df9a2045ea9da57200221317b32730a7efb228b812d5bc7a5eef696963f6"} Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.528566 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.529978 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.528729 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:04 crc 
kubenswrapper[4183]: I0813 19:59:04.530099 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.538973 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.539071 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.541196 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.541284 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Aug 13 19:59:04 crc kubenswrapper[4183]: I0813 19:59:04.818165 4183 kubelet.go:2436] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 19:59:05 crc kubenswrapper[4183]: W0813 19:59:05.099673 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ad279b4_d9dc_42a8_a1c8_a002bd063482.slice/crio-9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7 WatchSource:0}: Error finding container 9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7: Status 404 returned error can't find the container with id 9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7 Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.361704 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"7c70e17033c682195efbddb8b127b02b239fc67e597936ebf8283a79edea04e3"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.428931 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"5aa1911bfbbdddf05ac698792baebff15593339de601d73adeab5547c57d456a"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.442340 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"b27ef0e5311849c50317136877d704c05729518c9dcec03ecef2bf1dc575fbe7"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.452974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.469059 
4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerStarted","Data":"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.700655 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerStarted","Data":"47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.737738 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerStarted","Data":"a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.748648 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"906e45421a720cb9e49c934ec2f44b74221d2f79757d98a1581d6bf6a1fc5f31"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.755641 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"10cfef5f94c814cc8355e17d7fdcccd543ac26c393e3a7c8452af1219913ea3a"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.780538 4183 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" exitCode=0 Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.782330 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808228 4183 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac" exitCode=0 Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac"} Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.808772 4183 scope.go:117] "RemoveContainer" containerID="4710ef779fd86c7f05070a5dee732122e43dc1edc22d8a8a4fd8e793b08a2c02" Aug 13 19:59:05 crc kubenswrapper[4183]: I0813 19:59:05.862679 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"11a119fa806fd94f2b3718680e62c440fc53a5fd0df6934b156abf3171c59e5b"} Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.002575 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"526dc34c7f0224642660d74a0d2dc6ff8a8ffcb683f16dcb88b66dd5d2363e0a"} Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.137683 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-8-crc"] Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220277 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = 
Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220400 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220580 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:06 crc kubenswrapper[4183]: E0813 19:59:06.220642 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.221163 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"ae65970c89fa0f40e01774098114a6c64c75a67483be88aef59477e78bbb3f33"} Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.516774 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"} Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.546937 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"4146ac88f77df20ec1239010fef77264fc27e17e8819d70b5707697a50193ca3"} Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.553253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"aab926f26907ff6a0818e2560ab90daa29fc5dd04e9bc7ca22bafece60120f4d"} Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.625622 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"078835e6e37f63907310c41b225ef71d7be13426f87f8b32c57e6b2e8c13a5a8"} Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.626522 4183 patch_prober.go:28] interesting 
pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.626623 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.649644 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Aug 13 19:59:06 crc kubenswrapper[4183]: I0813 19:59:06.649752 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:06.994479 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9127708_ccfd_4891_8a3a_f0cacb77e0f4.slice/crio-0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238 WatchSource:0}: Error finding container 0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238: Status 404 returned error can't find the container with id 0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238 Aug 13 19:59:07 crc 
kubenswrapper[4183]: W0813 19:59:07.069131 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae0dfbb_a0a9_45bb_85b5_cd9f94f64fe7.slice/crio-717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5 WatchSource:0}: Error finding container 717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5: Status 404 returned error can't find the container with id 717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5 Aug 13 19:59:07 crc kubenswrapper[4183]: W0813 19:59:07.241660 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d51f445_054a_4e4f_a67b_a828f5a32511.slice/crio-22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed WatchSource:0}: Error finding container 22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed: Status 404 returned error can't find the container with id 22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.687314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014"} Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.708549 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"961449f5e5e8534f4a0d9f39c1853d25bd56415cac128d936d114b63d80904dc"} Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.778736 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" 
event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039"} Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.789641 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"22d48c9fe60d97ed13552f5aeeaa6d1d74f506bd913cdde4ceede42e8c963eed"} Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.867302 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerStarted","Data":"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f"} Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.914018 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"} Aug 13 19:59:07 crc kubenswrapper[4183]: I0813 19:59:07.984149 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"} Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.082174 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"a10fd87b4b9fef36cf95839340b0ecf97070241659beb7fea58a63794a40a007"} Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.130544 4183 generic.go:334] "Generic (PLEG): container 
finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" exitCode=0 Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.130656 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8"} Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.206688 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerStarted","Data":"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"} Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.259460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71"} Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.313212 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"97418fd7ce5644b997f128bada5bb6c90d375c9d7626fb1d5981b09a8d6771d7"} Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.326680 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"717e351e369b4a5799931814fac4e486642f405706a608624e022a6e952b8ef5"} Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.399579 4183 remote_image.go:180] "PullImage from image service failed" err="rpc 
error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.399704 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.400079 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:08 crc kubenswrapper[4183]: E0813 19:59:08.400136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.467595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"ce1a5d3596103f2604e3421cb68ffd62e530298f3c2a7b8074896c2e7152c621"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.612595 4183 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4" exitCode=0
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.613514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.716179 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157"}
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.718077 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.729190 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.729275 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:08 crc kubenswrapper[4183]: I0813 19:59:08.934742 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"0e119602de1750a507b4e3fbbc37af9db215cdfe171f58b23acd54302144e238"}
Aug 13 19:59:09 crc kubenswrapper[4183]: E0813 19:59:09.290158 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.018352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.081748 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.135170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerStarted","Data":"aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.167201 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"24d2c9dad5c7f6fd94e47dca912545c4f5b5cbadb90c11ba477fb1b512f0e277"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.192024 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"459e80350bae6577b517dba7ef99686836a51fad11f6f4125003b262f73ebf17"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.224534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"d6d93047e42b7c37ac294d852c1865b360a39c098b65b453bf43202316d1ee5f"}
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.225748 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:10 crc kubenswrapper[4183]: I0813 19:59:10.225873 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.278220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.318271 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"17f6677962bd95967c105804158d24c9aee9eb80515bdbdb6c49e51ae42b0a5c"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.318621 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.328253 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.328368 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.356477 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"8ef23ac527350f7127dc72ec6d1aba3bba5c4b14a730a4bd909a3fdfd399378c"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.411405 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"653c5a1f52832901395f8f14e559c992fce4ce38bc73620d39cf1567c2981bf9"}
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.418058 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.427601 4183 patch_prober.go:28] interesting pod/route-controller-manager-5c4dbb8899-tchz5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.427687 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.431216 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.441212 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Aug 13 19:59:11 crc kubenswrapper[4183]: I0813 19:59:11.441307 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.490618 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.491308 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.908493 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909163 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909333 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:11 crc kubenswrapper[4183]: E0813 19:59:11.909391 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.492982 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerStarted","Data":"0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1"}
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.521463 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body=
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.521576 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.522274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.675186 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.678052 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.741163 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"7a017f2026334b4ef3c2c72644e98cd26b3feafb1ad74386d1d7e4999fa9e9bb"}
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.893079 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Aug 13 19:59:12 crc kubenswrapper[4183]: I0813 19:59:12.893258 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.457120 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.458286 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.555577 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.556327 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.557394 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n59fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k9qqb_openshift-marketplace(ccdf38cf-634a-41a2-9c8b-74bb86af80a7): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:13 crc kubenswrapper[4183]: E0813 19:59:13.557571 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.893152 4183 patch_prober.go:28] interesting pod/route-controller-manager-5c4dbb8899-tchz5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:13 crc kubenswrapper[4183]: I0813 19:59:13.893326 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:13.988691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerStarted","Data":"5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df"}
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:13.990019 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.002280 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.002505 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.023732 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"}
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.061266 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2"}
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.125384 4183 generic.go:334] "Generic (PLEG): container finished" podID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerID="c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03" exitCode=0
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.125542 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerDied","Data":"c00af436eed79628e0e4901e79048ca0af8fcfc3099b202cf5fa799464c7fc03"}
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.265455 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"}
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.266575 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.269384 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.269458 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.409125 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.409241 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.440141 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:14 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:14 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.440285 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.528690 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.531286 4183 patch_prober.go:28] interesting pod/catalog-operator-857456c46-7f5wf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.532753 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.531345 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.533736 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.536046 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.540190 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.544531 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.696686 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.696924 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.712116 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.712236 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.901883 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902317 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902415 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.902445 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.920225 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.920358 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.951462 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.951540 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.955313 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 19:59:14 crc kubenswrapper[4183]: I0813 19:59:14.955461 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.027582 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body=
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.027930 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused"
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.295721 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.460713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:15 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:15 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.460930 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.553274 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.553471 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.554294 4183 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded" start-of-body=
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.554327 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": context deadline exceeded"
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.678220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e"}
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.708219 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"21441aa058a7fc7abd5477d6c596271f085a956981f7a1240f7a277a497c7755"}
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.709051 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v54bt"
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.840114 4183 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" exitCode=0
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.841377 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963"}
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842433 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842496 4183 prober.go:107] "Probe failed"
probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.842989 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.843050 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.850667 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:15 crc kubenswrapper[4183]: I0813 19:59:15.850753 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092412 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the 
Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092516 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092636 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:
false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.092723 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.435723 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436359 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436499 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mwzcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4v97_openshift-marketplace(bb917686-edfb-4158-86ad-6fce0abec64c): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:16 crc kubenswrapper[4183]: E0813 19:59:16.436555 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.450177 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:16 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.450374 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.993579 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"55fde84744bf28e99782e189a6f37f50b90f68a3503eb7f58d9744fc329b3ad0"} Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.995511 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness 
probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:16 crc kubenswrapper[4183]: I0813 19:59:16.995591 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:17 crc kubenswrapper[4183]: E0813 19:59:17.011104 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:17 crc kubenswrapper[4183]: I0813 19:59:17.450267 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:17 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:17 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:17 crc kubenswrapper[4183]: I0813 19:59:17.451048 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.013627 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" 
event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.027744 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.036728 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"097e790a946b216a85d0fae9757cd924373f90ee6f60238beb63ed4aaad70a83"} Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.052644 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" exitCode=0 Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.053390 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3"} Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.221555 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.222256 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.222765 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:18 crc kubenswrapper[4183]: E0813 19:59:18.223280 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.455540 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:18 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:18 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:18 crc kubenswrapper[4183]: I0813 19:59:18.455705 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:19 crc kubenswrapper[4183]: I0813 19:59:19.132644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.179333 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"7affac532533ef0eeb1ab47860360791c20d3b170a8f0f2ff3a4172b7a3e0d60"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.179418 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:19.322218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.481340 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:19.481422 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 
13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.187629 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"c5e2f15a8db655a6a0bf0f0e7b58aa9539a6061f0ba62d00544e8ae2fda4799c"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.191395 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3" exitCode=0 Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.193318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerDied","Data":"b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3"} Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.431924 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.444106 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:20 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:20 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:20 crc kubenswrapper[4183]: I0813 19:59:20.444186 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578019 4183 
remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578086 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578199 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:20 crc kubenswrapper[4183]: E0813 19:59:20.578250 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:21 crc kubenswrapper[4183]: I0813 19:59:21.439511 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:21 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:21 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:21 crc kubenswrapper[4183]: I0813 19:59:21.440174 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.321313 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"c58eafce8379a44387b88a8f240cc4db0f60e96be3a329c57feb7b3d30a9c1df"} Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.323541 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.333687 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 
19:59:22.334196 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.395051 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38"} Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.444092 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:22 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:22 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:22 crc kubenswrapper[4183]: I0813 19:59:22.444232 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.383529 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.383975 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.384097 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy
:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:23 crc kubenswrapper[4183]: E0813 19:59:23.384157 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.446637 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:23 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:23 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.446729 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.541045 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"616a149529a4e62cb9a66b620ce134ef7451a62a02ea4564d08effb1afb8a8e3"} Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.543191 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbw49" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.550606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbw49" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.583318 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerStarted","Data":"b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9"} Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.589691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"} Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.594615 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.595949 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.596185 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.616582 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:23 crc kubenswrapper[4183]: I0813 19:59:23.616746 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.442155 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:24 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.442662 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.525297 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe 
status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.526345 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.528019 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.529015 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.567026 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.621020 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"1cca846256bf85cbd7c7f47d78ffd3a017ed62ad697f87acb64600f492c2e556"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.628659 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.655400 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.656171 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.665497 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.665614 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.666135 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"882d38708fa83bc398808c0ce244f77c0ef2b6ab6f69e988b1f27aaea5d0229e"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.672329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" 
event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"19ec4c1780cc88a3cfba567eee52fe5f2e6994b97cbb3947d1ab13f0c4146bf5"} Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.675828 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.676112 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.681676 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.682043 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.698210 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.807965 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.876653 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.876737 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.877108 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.877152 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.889020 4183 patch_prober.go:28] interesting pod/controller-manager-6ff78978b4-q4vv8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.889129 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.960069 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.961051 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987503 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987631 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987733 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:24 crc kubenswrapper[4183]: I0813 19:59:24.987653 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.020461 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" start-of-body= Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.020575 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": dial tcp 10.217.0.30:6443: connect: connection refused" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021135 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021177 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection 
refused" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021239 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.021272 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.218175 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.373679 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374597 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16" Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374931 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nzb4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dcqzh_openshift-marketplace(6db26b71-4e04-4688-a0c0-00e06e8c888d): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:25 crc kubenswrapper[4183]: E0813 19:59:25.374982 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.434518 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:25 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:25 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.434683 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.688139 4183 generic.go:334] "Generic (PLEG): container finished" podID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerID="b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9" exitCode=0 Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.688651 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" 
event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerDied","Data":"b84a7ab7f1820bc9c15f1779999dcf04a421b3a4ef043acf93ea2f14cdcff7d9"} Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.692565 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"98e6fc91236bf9c4dd7a99909033583c8b64e10f67e3130a12a92936c6a6a8dd"} Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.703346 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"f45aa787fb1c206638720c3ec1a09cb5a4462bb90c0d9e77276f385c9f24e9bc"} Aug 13 19:59:25 crc kubenswrapper[4183]: I0813 19:59:25.708073 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"} Aug 13 19:59:26 crc kubenswrapper[4183]: I0813 19:59:26.453310 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:26 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:26 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:26 crc kubenswrapper[4183]: I0813 19:59:26.453464 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:26 crc 
kubenswrapper[4183]: E0813 19:59:26.580144 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580278 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16" Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580401 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:26 crc kubenswrapper[4183]: E0813 19:59:26.580459 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.442359 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:27 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:27 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:27 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.442744 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.749963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" event={"ID":"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab","Type":"ContainerStarted","Data":"850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512"} Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.761394 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38" exitCode=0 Aug 13 19:59:27 crc kubenswrapper[4183]: I0813 19:59:27.761740 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"8d517c0fc52e9a1039f5e59cdbb937f13503c7a4c1c4b293a874285946b48f38"} Aug 13 19:59:28 crc kubenswrapper[4183]: E0813 
19:59:28.212953 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.371432 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-8-crc" podStartSLOduration=35619914.37117759 podStartE2EDuration="9894h25m14.371177589s" podCreationTimestamp="2024-06-27 13:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:59:28.369383438 +0000 UTC m=+935.062048516" watchObservedRunningTime="2025-08-13 19:59:28.371177589 +0000 UTC m=+935.063842437" Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.441302 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:28 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:28 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:28 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:28 crc kubenswrapper[4183]: I0813 19:59:28.441393 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.432333 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:29 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:29 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:29 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.433101 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.843299 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.844565 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.846243 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Aug 13 19:59:29 crc kubenswrapper[4183]: I0813 19:59:29.846371 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Aug 13 19:59:30 crc kubenswrapper[4183]: I0813 19:59:30.435651 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Aug 13 19:59:30 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:30 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:30 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:30 crc kubenswrapper[4183]: I0813 19:59:30.436305 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325467 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325538 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325757 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:31 crc kubenswrapper[4183]: E0813 19:59:31.325940 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.436887 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:31 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:31 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:31 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.436986 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:31 crc kubenswrapper[4183]: I0813 19:59:31.669384 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:59:32 crc kubenswrapper[4183]: I0813 19:59:32.437963 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:32 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:32 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:32 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:32 crc kubenswrapper[4183]: I0813 19:59:32.438645 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.160183 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.259101 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72854c1e-5ae2-4ed6-9e50-ff3bccde2635" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.259682 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") pod \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.260125 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") pod \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\" (UID: \"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\") " Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.260634 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.290011 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72854c1e-5ae2-4ed6-9e50-ff3bccde2635" (UID: "72854c1e-5ae2-4ed6-9e50-ff3bccde2635"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.362543 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72854c1e-5ae2-4ed6-9e50-ff3bccde2635-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.440531 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:33 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:33 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:33 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.440941 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831200 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" event={"ID":"72854c1e-5ae2-4ed6-9e50-ff3bccde2635","Type":"ContainerDied","Data":"d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877"} Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831293 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84dd6581e40beedee68c638bafabbf5843141ec2068acac7cb06e5af3360877" Aug 13 19:59:33 crc kubenswrapper[4183]: I0813 19:59:33.831374 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc" Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.211519 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.211927 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.343755 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344580 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344712 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:34 crc kubenswrapper[4183]: E0813 19:59:34.344764 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.433338 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:34 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:34 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:34 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.433458 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.841116 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.841658 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" 
podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872051 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872110 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872615 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.872671 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.873283 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875150 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875369 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" gracePeriod=2 Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875904 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.875965 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.949438 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.949705 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" 
Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.985305 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.985402 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.986513 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Aug 13 19:59:34 crc kubenswrapper[4183]: I0813 19:59:34.987203 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.019257 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.019362 4183 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.020556 4183 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.020970 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.438605 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:35 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:35 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:35 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.438911 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.482606 4183 kubelet.go:2533] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.751490 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752102 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.751981 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752228 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752015 4183 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.752299 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.769313 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.858535 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.860310 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9" exitCode=0 Aug 13 19:59:35 crc kubenswrapper[4183]: I0813 19:59:35.860468 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"} Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.022392 4183 patch_prober.go:28] interesting pod/oauth-openshift-765b47f944-n2lhl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.022581 4183 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.30:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.067663 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.432964 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:36 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:36 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:36 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:36 crc kubenswrapper[4183]: I0813 19:59:36.433261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:37 crc kubenswrapper[4183]: E0813 19:59:37.215374 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:37 crc kubenswrapper[4183]: I0813 19:59:37.447280 4183 patch_prober.go:28] interesting 
pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:37 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:37 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:37 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:37 crc kubenswrapper[4183]: I0813 19:59:37.447479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:38 crc kubenswrapper[4183]: E0813 19:59:38.215975 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.435953 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:38 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:38 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:38 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.436590 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 
19:59:38.932638 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"7342452c1232185e3cd70eb0d269743e495acdb67ac2358d63c1509e164b1377"} Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.939102 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} Aug 13 19:59:38 crc kubenswrapper[4183]: I0813 19:59:38.940161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 19:59:39 crc kubenswrapper[4183]: E0813 19:59:39.223292 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.443735 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:39 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:39 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:39 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.444275 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:39 crc kubenswrapper[4183]: I0813 19:59:39.961542 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"ff87aa3e7fe778204f9c122934ebd1afdd2fc3dff3e2c7942831852cb04c7fc6"} Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.115312 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"] Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.116977 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" containerID="cri-o://47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" gracePeriod=30 Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.447684 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:40 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:40 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:40 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.448063 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.943630 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 
19:59:40.951272 4183 topology_manager.go:215] "Topology Admit Handler" podUID="e4a7de23-6134-4044-902a-0900dc04a501" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-kk8kg" Aug 13 19:59:40 crc kubenswrapper[4183]: E0813 19:59:40.951892 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.951963 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: E0813 19:59:40.952055 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952067 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952223 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" containerName="pruner" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.952247 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" containerName="collect-profiles" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.953316 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:40 crc kubenswrapper[4183]: I0813 19:59:40.968896 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.040960 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073230 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073359 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.073391 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.090682 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"] Aug 13 19:59:41 crc 
kubenswrapper[4183]: I0813 19:59:41.178551 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.178691 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.178721 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.180394 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.253571 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 
19:59:41.355614 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.447413 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:41 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:41 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:41 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.447506 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:41 crc kubenswrapper[4183]: I0813 19:59:41.611295 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.003196 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005033 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005239 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.005304 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:42 crc kubenswrapper[4183]: E0813 19:59:42.238198 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.450760 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 
19:59:42 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:42 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:42 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.451196 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.662137 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.677438 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.664605 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:42 crc kubenswrapper[4183]: I0813 19:59:42.677534 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.016536 4183 generic.go:334] "Generic (PLEG): container finished" podID="378552fd-5e53-4882-87ff-95f3d9198861" containerID="47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" exitCode=0 Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.016921 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerDied","Data":"47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7"} Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.018079 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.018295 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.439731 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:43 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:43 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:43 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:43 crc kubenswrapper[4183]: I0813 19:59:43.440334 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:44 crc kubenswrapper[4183]: E0813 19:59:44.213760 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.441219 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:44 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:44 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:44 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.441340 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.594374 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.819339 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v54bt" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.871664 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.871873 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.872118 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.872210 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:44 crc kubenswrapper[4183]: E0813 19:59:44.874435 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.949683 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:44 crc kubenswrapper[4183]: I0813 19:59:44.950412 4183 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.298527 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.310054 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.441733 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:45 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:45 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:45 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.442634 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.649936 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.650038 4183 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.649945 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.650244 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.884165 4183 patch_prober.go:28] interesting pod/authentication-operator-7cc7ff75d5-g9qv8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:45 crc kubenswrapper[4183]: I0813 19:59:45.885001 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:45 
crc kubenswrapper[4183]: I0813 19:59:45.948340 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Aug 13 19:59:46 crc kubenswrapper[4183]: I0813 19:59:46.437716 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:46 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:46 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:46 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:46 crc kubenswrapper[4183]: I0813 19:59:46.438164 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.329990 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16" Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330495 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/certified-operator-index:v4.16"
Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330660 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncrf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7287f_openshift-marketplace(887d596e-c519-4bfa-af90-3edd9e1b2f0f): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:47 crc kubenswrapper[4183]: E0813 19:59:47.330729 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.573828 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:47 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:47 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:47 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.573981 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:47 crc kubenswrapper[4183]: I0813 19:59:47.799589 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-kk8kg"]
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.080496 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"c5069234e6bbbde190e466fb11df01a395209a382d2942184c3f52c3865e00ee"}
Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.334680 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.334954 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/community-operator-index:v4.16"
Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.335577 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n6sqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8jhz6_openshift-marketplace(3f4dca86-e6ee-4ec9-8324-86aff960225e): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:48 crc kubenswrapper[4183]: E0813 19:59:48.335720 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.434752 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:48 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:48 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:48 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.435306 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.648599 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.649030 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650082 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650129 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.650161 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.651317 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.651352 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.652510 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Aug 13 19:59:48 crc kubenswrapper[4183]: I0813 19:59:48.652585 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" containerID="cri-o://f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" gracePeriod=30
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.029359 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.029884 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.123308 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.123512 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.139181 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv" event={"ID":"378552fd-5e53-4882-87ff-95f3d9198861","Type":"ContainerDied","Data":"fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039"}
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.139746 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf310c9137d2862f3313bbe4210058a1015f75db6cabbd845d64c247c4ee039"
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.164685 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.194471 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") "
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.195109 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") "
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.195253 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") pod \"378552fd-5e53-4882-87ff-95f3d9198861\" (UID: \"378552fd-5e53-4882-87ff-95f3d9198861\") "
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.202571 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.208273 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf" (OuterVolumeSpecName: "kube-api-access-d7ntf") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "kube-api-access-d7ntf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.220765 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key" (OuterVolumeSpecName: "signing-key") pod "378552fd-5e53-4882-87ff-95f3d9198861" (UID: "378552fd-5e53-4882-87ff-95f3d9198861"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.229296 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.229484 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297207 4183 reconciler_common.go:300] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/378552fd-5e53-4882-87ff-95f3d9198861-signing-cabundle\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297611 4183 reconciler_common.go:300] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/378552fd-5e53-4882-87ff-95f3d9198861-signing-key\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.297734 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d7ntf\" (UniqueName: \"kubernetes.io/projected/378552fd-5e53-4882-87ff-95f3d9198861-kube-api-access-d7ntf\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360235 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360331 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-operator-index:v4.16"
Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360594 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ptdrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f4jkp_openshift-marketplace(4092a9f8-5acc-4932-9e90-ef962eeb301a): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Aug 13 19:59:49 crc kubenswrapper[4183]: E0813 19:59:49.360647 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.443457 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.444219 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.879979 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]log ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok
Aug 13 19:59:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 19:59:49 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:49 crc kubenswrapper[4183]: I0813 19:59:49.880081 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.177107 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-vlbxv"
Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.177878 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"34cf17f4d863a4ac8e304ee5c662018d813019d268cbb7022afa9bdac6b80fbd"}
Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.441573 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:50 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:50 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:50 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:50 crc kubenswrapper[4183]: I0813 19:59:50.443668 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:51 crc kubenswrapper[4183]: E0813 19:59:51.212575 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7"
Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.440975 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:51 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:51 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:51 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.441203 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:51 crc kubenswrapper[4183]: E0813 19:59:51.468060 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679"
Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.987666 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"]
Aug 13 19:59:51 crc kubenswrapper[4183]: I0813 19:59:51.988080 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" containerID="cri-o://5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" gracePeriod=30
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.198111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"5ca33b1d9111046b71500c2532324037d0682ac3c0fabe705b5bd17f91afa552"}
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.198164 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.409457 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"]
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.422430 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"]
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.427195 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" containerID="cri-o://aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" gracePeriod=30
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.437009 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:52 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:52 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:52 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.437154 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.486875 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-service-ca/service-ca-666f99b6f-vlbxv"]
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.649433 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.649971 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 19:59:52 crc kubenswrapper[4183]: I0813 19:59:52.845735 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podStartSLOduration=12.845670263 podStartE2EDuration="12.845670263s" podCreationTimestamp="2025-08-13 19:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 19:59:52.781564366 +0000 UTC m=+959.474229104" watchObservedRunningTime="2025-08-13 19:59:52.845670263 +0000 UTC m=+959.538335011"
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.219976 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378552fd-5e53-4882-87ff-95f3d9198861" path="/var/lib/kubelet/pods/378552fd-5e53-4882-87ff-95f3d9198861/volumes"
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.223157 4183 generic.go:334] "Generic (PLEG): container finished" podID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerID="5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df" exitCode=0
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.223289 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerDied","Data":"5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df"}
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.228417 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerID="aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" exitCode=0
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.228543 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerDied","Data":"aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59"}
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.437134 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 19:59:53 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 19:59:53 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 19:59:53 crc kubenswrapper[4183]: healthz check failed
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.437248 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.854176 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.920999 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") "
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921104 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") "
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921134 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") "
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921170 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") "
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.921195 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") pod \"87df87f4-ba66-4137-8e41-1fa632ad4207\" (UID: \"87df87f4-ba66-4137-8e41-1fa632ad4207\") "
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.922384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.922508 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca" (OuterVolumeSpecName: "client-ca") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.923655 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config" (OuterVolumeSpecName: "config") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.969111 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57" (OuterVolumeSpecName: "kube-api-access-pzb57") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "kube-api-access-pzb57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 19:59:53 crc kubenswrapper[4183]: I0813 19:59:53.969275 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87df87f4-ba66-4137-8e41-1fa632ad4207" (UID: "87df87f4-ba66-4137-8e41-1fa632ad4207"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023502 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pzb57\" (UniqueName: \"kubernetes.io/projected/87df87f4-ba66-4137-8e41-1fa632ad4207-kube-api-access-pzb57\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023541 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87df87f4-ba66-4137-8e41-1fa632ad4207-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023554 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-config\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023573 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.023585 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87df87f4-ba66-4137-8e41-1fa632ad4207-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238042 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" event={"ID":"87df87f4-ba66-4137-8e41-1fa632ad4207","Type":"ContainerDied","Data":"4916f2a17d27bbf013c1e13f025d2cdf51127409f1a28c8a620b14bc4225ba0f"}
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238184 4183 scope.go:117] "RemoveContainer" containerID="5a16f80522246f66629d4cfcf1e317f7a3db9cc08045c713b73797a46e8882df"
Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.238294 4183 util.go:48] "No ready sandbox for
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff78978b4-q4vv8" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.436856 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.437289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.642583 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694196 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694343 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694387 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694444 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.694472 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.698711 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.709297 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ff78978b4-q4vv8"] Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.718283 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844327 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844401 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844479 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.844546 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") pod \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\" (UID: \"af6b67a3-a2bd-4051-9adc-c208a5a65d79\") " Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.846529 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca" (OuterVolumeSpecName: "client-ca") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.847339 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config" (OuterVolumeSpecName: "config") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.861274 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn" (OuterVolumeSpecName: "kube-api-access-hpzhn") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "kube-api-access-hpzhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.869651 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af6b67a3-a2bd-4051-9adc-c208a5a65d79" (UID: "af6b67a3-a2bd-4051-9adc-c208a5a65d79"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.871983 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.872086 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.876100 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.876212 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.896218 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-mtx25 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]log ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]etcd ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 19:59:54 crc kubenswrapper[4183]: 
[+]poststarthook/max-in-flight-filter ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 19:59:54 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 19:59:54 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 19:59:54 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.896445 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947258 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af6b67a3-a2bd-4051-9adc-c208a5a65d79-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947475 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947494 4183 reconciler_common.go:300] "Volume detached for volume 
\"kube-api-access-hpzhn\" (UniqueName: \"kubernetes.io/projected/af6b67a3-a2bd-4051-9adc-c208a5a65d79-kube-api-access-hpzhn\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.947512 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6b67a3-a2bd-4051-9adc-c208a5a65d79-config\") on node \"crc\" DevicePath \"\"" Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.953125 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 19:59:54 crc kubenswrapper[4183]: I0813 19:59:54.953213 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.267619 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.291160 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" path="/var/lib/kubelet/pods/87df87f4-ba66-4137-8e41-1fa632ad4207/volumes" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.294870 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" event={"ID":"af6b67a3-a2bd-4051-9adc-c208a5a65d79","Type":"ContainerDied","Data":"893b4f9b5ed27072046f833f87a3b5c0ae52bb015f77a4268cf775d1c39b6dcf"} Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.294955 4183 scope.go:117] "RemoveContainer" containerID="aa3bd53db5b871b1e7ccc9029bf14c3e8c4163839c67447dd344680fd1080e59" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331335 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331506 4183 topology_manager.go:215] "Topology Admit Handler" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" podNamespace="openshift-controller-manager" podName="controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331700 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331717 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331736 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 
19:59:55.331745 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.331763 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331814 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331971 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" containerName="route-controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.331991 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="378552fd-5e53-4882-87ff-95f3d9198861" containerName="service-ca-controller" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.332008 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="87df87f4-ba66-4137-8e41-1fa632ad4207" containerName="controller-manager" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.332662 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347326 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347460 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347597 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tf29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMess
agePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8s8pc_openshift-marketplace(c782cf62-a827-4677-b3c2-6f82c5f09cbb): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 19:59:55 crc kubenswrapper[4183]: E0813 19:59:55.347655 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367304 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367481 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367520 4183 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.367684 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.445246 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:55 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:55 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:55 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.445358 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.468643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.468993 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469037 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469071 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.469106 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod 
\"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.648929 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.649094 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.692567 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.694217 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.696064 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.700916 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc 
kubenswrapper[4183]: I0813 19:59:55.701464 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.711293 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.711751 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:55 crc kubenswrapper[4183]: I0813 19:59:55.791361 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.012351 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.152557 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.166000 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.177683 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod 
\"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.435947 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:56 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:56 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:56 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.436149 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.471761 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.471976 4183 topology_manager.go:215] "Topology Admit Handler" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.475959 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612404 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.612630 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.613039 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.679475 4183 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714217 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714435 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.714613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.847427 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.847823 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.848006 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.857636 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:56 crc kubenswrapper[4183]: I0813 19:59:56.923763 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"controller-manager-c4dd57946-mpxjt\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.000386 4183 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.020516 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.042895 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.052159 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.059066 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.070227 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.115680 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"route-controller-manager-5b77f9fd48-hb8xt\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.165521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.173370 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.209604 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 19:59:57 crc kubenswrapper[4183]: E0813 19:59:57.219465 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.437713 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:57 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:57 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:57 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.437919 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:57 crc kubenswrapper[4183]: I0813 19:59:57.929510 4183 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-08-13T19:59:57.000640771Z","Handler":null,"Name":""} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.085657 4183 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.085937 4183 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.115602 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": read tcp 10.217.0.2:40914->10.217.0.23:8443: read: connection reset by peer" start-of-body= Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.115726 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": read tcp 10.217.0.2:40914->10.217.0.23:8443: read: connection reset by peer" Aug 13 19:59:58 crc kubenswrapper[4183]: E0813 19:59:58.213433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.357685 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"42d711544e11c05fc086e8f0c7a21cc883bc678e9e7c9221d490bdabc9cffe87"} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360293 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log" Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360735 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" exitCode=255 Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.360869 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.442113 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:58 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:58 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:58 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:58 crc kubenswrapper[4183]: I0813 19:59:58.442250 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:59 crc kubenswrapper[4183]: E0813 19:59:59.236509 4183 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.435876 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 19:59:59 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 19:59:59 crc kubenswrapper[4183]: [+]process-running ok Aug 13 19:59:59 crc kubenswrapper[4183]: healthz check failed Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.436152 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.866426 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 19:59:59 crc kubenswrapper[4183]: I0813 19:59:59.909397 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.027588 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"] Aug 13 20:00:00 crc kubenswrapper[4183]: W0813 20:00:00.070724 4183 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16f68e98_a8f9_417a_b92b_37bfd7b11e01.slice/crio-4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54 WatchSource:0}: Error finding container 4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54: Status 404 returned error can't find the container with id 4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54 Aug 13 20:00:00 crc kubenswrapper[4183]: E0813 20:00:00.219221 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.430252 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.430382 4183 topology_manager.go:215] "Topology Admit Handler" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.431281 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.451065 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:00 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:00 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:00 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.451160 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.481406 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerStarted","Data":"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54"} Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.517054 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.517335 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563374 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod 
\"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563523 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.563608 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.587423 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"] Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.650425 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.650573 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: 
connection refused" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672066 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672139 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.672199 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.681316 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.767383 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"collect-profiles-29251920-wcws2\" (UID: 
\"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:00 crc kubenswrapper[4183]: I0813 20:00:00.831735 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"collect-profiles-29251920-wcws2\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.214016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354370 4183 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354432 4183 kuberuntime_image.go:55] "Failed to pull image" err="unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.16" Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354548 4183 kuberuntime_manager.go:1262] init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.16,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r7dbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000210000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rmwfn_openshift-marketplace(9ad279b4-d9dc-42a8-a1c8-a002bd063482): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. 
Further instructions can be found here: https://access.redhat.com/RegistryAuthentication Aug 13 20:00:01 crc kubenswrapper[4183]: E0813 20:00:01.354595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.435662 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:01 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:01 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:01 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.437439 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:01 crc kubenswrapper[4183]: I0813 20:00:01.694507 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"] Aug 13 20:00:02 crc kubenswrapper[4183]: E0813 20:00:02.212677 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" 
pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.434541 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:02 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:02 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:02 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.434647 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.494456 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"
Aug 13 20:00:02 crc kubenswrapper[4183]: I0813 20:00:02.683346 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerStarted","Data":"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a"}
Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.435374 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:03 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:03 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:03 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.435498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.648682 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 20:00:03 crc kubenswrapper[4183]: I0813 20:00:03.649216 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.435246 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:04 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:04 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:04 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.435580 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872257 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872991 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873061 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.872265 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873415 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.873982 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.875079 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 20:00:04 crc kubenswrapper[4183]: I0813 20:00:04.875131 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" gracePeriod=2
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.025423 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.026036 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.396987 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"]
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.434620 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:05 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:05 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:05 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.435185 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.716564 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" exitCode=0
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.716715 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f"}
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.717008 4183 scope.go:117] "RemoveContainer" containerID="b4940961924b80341abc448ef2ef186a7af57fade4e32cd5feb2e52defb2d5f9"
Aug 13 20:00:05 crc kubenswrapper[4183]: I0813 20:00:05.719698 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerStarted","Data":"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348"}
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.435459 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:06 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:06 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:06 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.436133 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.650037 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.650225 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.730625 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerStarted","Data":"d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c"}
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.731126 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.734101 4183 patch_prober.go:28] interesting pod/route-controller-manager-5b77f9fd48-hb8xt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.734194 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.735317 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerStarted","Data":"3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"}
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.741610 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.742420 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"}
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.743332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.807511 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podStartSLOduration=12.807457808 podStartE2EDuration="12.807457808s" podCreationTimestamp="2025-08-13 19:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:06.802006483 +0000 UTC m=+973.494671341" watchObservedRunningTime="2025-08-13 20:00:06.807457808 +0000 UTC m=+973.500122546"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.823476 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.823671 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a0453d24-e872-43af-9e7a-86227c26d200" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-9-crc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.824558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.830140 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.830723 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.843831 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.844033 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.857413 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946207 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946359 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.946558 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:06 crc kubenswrapper[4183]: I0813 20:00:06.951349 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" podStartSLOduration=13.9512997 podStartE2EDuration="13.9512997s" podCreationTimestamp="2025-08-13 19:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:06.947395608 +0000 UTC m=+973.640060666" watchObservedRunningTime="2025-08-13 20:00:06.9512997 +0000 UTC m=+973.643964418"
Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.023143 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt"
Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.049629 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.059444 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt"
Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.444468 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:07 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:07 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:07 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.444561 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:07 crc kubenswrapper[4183]: I0813 20:00:07.597730 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.042742 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"
Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.440824 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:08 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:08 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:08 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:08 crc kubenswrapper[4183]: I0813 20:00:08.441453 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:09 crc kubenswrapper[4183]: E0813 20:00:09.211359 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.434143 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.434352 4183 topology_manager.go:215] "Topology Admit Handler" podUID="227e3650-2a85-4229-8099-bb53972635b2" podNamespace="openshift-kube-controller-manager" podName="installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.435408 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.436985 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:09 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:09 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:09 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.437129 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597139 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597291 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.597420 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699065 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699205 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699229 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:09 crc kubenswrapper[4183]: I0813 20:00:09.699398 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.137030 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:00:10 crc kubenswrapper[4183]: E0813 20:00:10.214874 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.218068 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.346719 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"installer-9-crc\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.444256 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:10 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:10 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:10 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.447014 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.514376 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.815665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"}
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818629 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818751 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.818898 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:10 crc kubenswrapper[4183]: I0813 20:00:10.832568 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerStarted","Data":"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786"}
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.408692 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"]
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.409538 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" containerID="cri-o://3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc" gracePeriod=30
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.446038 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:11 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:11 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:11 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.446320 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.657414 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"]
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.657694 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" containerID="cri-o://d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" gracePeriod=30
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.839995 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:11 crc kubenswrapper[4183]: I0813 20:00:11.840697 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214330 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214469 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 20:00:12 crc kubenswrapper[4183]: E0813 20:00:12.214595 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.432418 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:12 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:12 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:12 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.432950 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.827582 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Aug 13 20:00:12 crc kubenswrapper[4183]: W0813 20:00:12.844932 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda0453d24_e872_43af_9e7a_86227c26d200.slice/crio-beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319 WatchSource:0}: Error finding container beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319: Status 404 returned error can't find the container with id beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.874373 4183 generic.go:334] "Generic (PLEG): container finished" podID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerID="d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" exitCode=0
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.874577 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerDied","Data":"d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c"}
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.882748 4183 generic.go:334] "Generic (PLEG): container finished" podID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerID="3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc" exitCode=0
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.883140 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerDied","Data":"3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"}
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.884751 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:00:12 crc kubenswrapper[4183]: I0813 20:00:12.891048 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.077103 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" podStartSLOduration=13.077002107 podStartE2EDuration="13.077002107s" podCreationTimestamp="2025-08-13 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:13.063204943 +0000 UTC m=+979.755870041" watchObservedRunningTime="2025-08-13 20:00:13.077002107 +0000 UTC m=+979.769667125"
Aug 13 20:00:13 crc kubenswrapper[4183]: E0813 20:00:13.215023 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482"
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.415704 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.444931 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"]
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.453029 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:13 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:13 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:13 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.453140 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:13 crc kubenswrapper[4183]: W0813 20:00:13.496289 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod227e3650_2a85_4229_8099_bb53972635b2.slice/crio-ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef WatchSource:0}: Error finding container ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef: Status 404 returned error can't find the container with id ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.942064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerStarted","Data":"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef"}
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.944612 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerStarted","Data":"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319"}
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.967043 4183 generic.go:334] "Generic (PLEG): container finished" podID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" exitCode=0
Aug 13 20:00:13 crc kubenswrapper[4183]: I0813 20:00:13.967120 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerDied","Data":"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786"}
Aug 13 20:00:14 crc kubenswrapper[4183]: E0813 20:00:14.233752 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d"
Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.437693 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:14 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:14 crc kubenswrapper[4183]: [+]process-running 
ok Aug 13 20:00:14 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.438231 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.873447 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.872215 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.874133 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.949658 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.949746 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.976380 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" event={"ID":"16f68e98-a8f9-417a-b92b-37bfd7b11e01","Type":"ContainerDied","Data":"4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54"} Aug 13 20:00:14 crc kubenswrapper[4183]: I0813 20:00:14.976449 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cfa6ec97b88dab6d16213f83b80b7667542c9da6b7b1c559cfe136cf9055f54" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.002072 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.103994 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104141 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104251 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.104408 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") pod \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\" (UID: \"16f68e98-a8f9-417a-b92b-37bfd7b11e01\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.105448 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca" (OuterVolumeSpecName: "client-ca") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.106161 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config" (OuterVolumeSpecName: "config") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.106630 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.144033 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.164398 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt" (OuterVolumeSpecName: "kube-api-access-rvvgt") pod "16f68e98-a8f9-417a-b92b-37bfd7b11e01" (UID: "16f68e98-a8f9-417a-b92b-37bfd7b11e01"). InnerVolumeSpecName "kube-api-access-rvvgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207183 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f68e98-a8f9-417a-b92b-37bfd7b11e01-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207266 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rvvgt\" (UniqueName: \"kubernetes.io/projected/16f68e98-a8f9-417a-b92b-37bfd7b11e01-kube-api-access-rvvgt\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207297 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207317 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.207334 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f68e98-a8f9-417a-b92b-37bfd7b11e01-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.440088 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:15 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:15 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:15 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.440501 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.687573 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.818880 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819048 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819085 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.819178 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") pod \"83bf0764-e80c-490b-8d3c-3cf626fdb233\" (UID: \"83bf0764-e80c-490b-8d3c-3cf626fdb233\") " Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.821131 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca" (OuterVolumeSpecName: "client-ca") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.821665 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config" (OuterVolumeSpecName: "config") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.829234 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72" (OuterVolumeSpecName: "kube-api-access-njx72") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "kube-api-access-njx72". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.839170 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83bf0764-e80c-490b-8d3c-3cf626fdb233" (UID: "83bf0764-e80c-490b-8d3c-3cf626fdb233"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920862 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-njx72\" (UniqueName: \"kubernetes.io/projected/83bf0764-e80c-490b-8d3c-3cf626fdb233-kube-api-access-njx72\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920931 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920954 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83bf0764-e80c-490b-8d3c-3cf626fdb233-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.920969 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83bf0764-e80c-490b-8d3c-3cf626fdb233-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988923 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" event={"ID":"83bf0764-e80c-490b-8d3c-3cf626fdb233","Type":"ContainerDied","Data":"13b18d12f5f999b55b87ab784455cad9666242a99651bc76e260b2a3672b215a"} Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988936 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4dd57946-mpxjt" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988992 4183 scope.go:117] "RemoveContainer" containerID="d5c73235c66ef57fa18c4f490c290086bd39214c316a1e20bac3ddba0b9ab23c" Aug 13 20:00:15 crc kubenswrapper[4183]: I0813 20:00:15.988982 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.341272 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.432894 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.433074 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.433126 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") pod \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\" (UID: \"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\") " Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.434291 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume" (OuterVolumeSpecName: "config-volume") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.439630 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c" (OuterVolumeSpecName: "kube-api-access-ctj8c") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "kube-api-access-ctj8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446259 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:16 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:16 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:16 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446463 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" (UID: "deaee4f4-7b7a-442d-99b7-c8ac62ef5f27"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.446488 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543389 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543514 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:16 crc kubenswrapper[4183]: I0813 20:00:16.543544 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ctj8c\" (UniqueName: \"kubernetes.io/projected/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27-kube-api-access-ctj8c\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.006121 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerStarted","Data":"3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d"} Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013564 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" event={"ID":"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27","Type":"ContainerDied","Data":"eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348"} Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013619 4183 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="eae823dac0e12a2bc5b77515bdd8c7d980ff451f9904af126e1e2453718ac348" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.013743 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.213161 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337281 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337432 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" podNamespace="openshift-kube-apiserver" podName="installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337602 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337620 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337640 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337653 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles" Aug 13 20:00:17 crc kubenswrapper[4183]: E0813 20:00:17.337671 4183 cpu_manager.go:396] "RemoveStaleState: 
removing container" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.337716 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338220 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" containerName="collect-profiles" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338243 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" containerName="route-controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338255 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" containerName="controller-manager" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.338641 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.383506 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.384930 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.385493 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.404515 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.412347 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.448936 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:17 crc kubenswrapper[4183]: [-]has-synced failed: reason 
withheld Aug 13 20:00:17 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:17 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.449427 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.486887 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487010 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487142 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487243 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.487684 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.520086 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.588519 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.588681 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.589416 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627075 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627262 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627423 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.627961 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.628068 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.628206 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697161 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697279 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697345 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.697383 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798655 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798729 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.798921 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.801515 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.802501 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.847371 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.914268 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"]
Aug 13 20:00:17 crc kubenswrapper[4183]: I0813 20:00:17.945291 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.077101 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerStarted","Data":"1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99"}
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.112972 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b77f9fd48-hb8xt"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.129067 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.155154 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"route-controller-manager-6cfd9fc8fc-7sbzw\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.213252 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.263518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.464305 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:18 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:18 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:18 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.464656 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.806627 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d9678894c-wx62n"]
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.806961 4183 topology_manager.go:215] "Topology Admit Handler" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" podNamespace="openshift-console" podName="console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.807928 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.869628 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.937734 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.937945 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938025 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938067 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938098 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938179 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.938207 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:18 crc kubenswrapper[4183]: I0813 20:00:18.951491 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.039936 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040041 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040075 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040179 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040204 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040248 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.040287 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.043475 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.057114 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.058261 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.062310 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.074712 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.088099 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.102692 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.203213 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c4dd57946-mpxjt"]
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.293347 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f68e98-a8f9-417a-b92b-37bfd7b11e01" path="/var/lib/kubelet/pods/16f68e98-a8f9-417a-b92b-37bfd7b11e01/volumes"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.308462 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bf0764-e80c-490b-8d3c-3cf626fdb233" path="/var/lib/kubelet/pods/83bf0764-e80c-490b-8d3c-3cf626fdb233/volumes"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.426015 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"console-5d9678894c-wx62n\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.441234 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:19 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:19 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:19 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.441519 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:19 crc kubenswrapper[4183]: I0813 20:00:19.537411 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.223065 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.223268 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" podNamespace="openshift-controller-manager" podName="controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.224825 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.230713 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-9-crc" podStartSLOduration=11.230656964 podStartE2EDuration="11.230656964s" podCreationTimestamp="2025-08-13 20:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:20.208394449 +0000 UTC m=+986.901059297" watchObservedRunningTime="2025-08-13 20:00:20.230656964 +0000 UTC m=+986.923321692"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.253745 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254245 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254530 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.254737 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.255015 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.259287 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.288758 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.350073 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"]
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378405 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378560 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378654 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378685 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.378717 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.456205 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:20 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:20 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:20 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.456309 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.487569 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=14.487510238 podStartE2EDuration="14.487510238s" podCreationTimestamp="2025-08-13 20:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:20.487207379 +0000 UTC m=+987.179872367" watchObservedRunningTime="2025-08-13 20:00:20.487510238 +0000 UTC m=+987.180175056"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489643 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489816 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489878 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489918 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.489970 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.494680 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.504770 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.567351 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.568035 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.632650 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"controller-manager-67685c4459-7p2h8\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:20 crc kubenswrapper[4183]: I0813 20:00:20.870208 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8"
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.163475 4183 generic.go:334] "Generic (PLEG): container finished" podID="a0453d24-e872-43af-9e7a-86227c26d200" containerID="3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d" exitCode=0
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.163712 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerDied","Data":"3e7eb9892d5a94b55021884eb7d6b9f29402769ffac497c2b86edb6618a7ef4d"}
Aug 13 20:00:21 crc kubenswrapper[4183]: E0813 20:00:21.234485 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.442436 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:21 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:21 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:21 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:21 crc kubenswrapper[4183]: I0813 20:00:21.442512 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:22 crc kubenswrapper[4183]: I0813 20:00:22.447411 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:22 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:22 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:22 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:22 crc kubenswrapper[4183]: I0813 20:00:22.447973 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:23 crc kubenswrapper[4183]: E0813 20:00:23.214650 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Aug 13 20:00:23 crc kubenswrapper[4183]: E0813 20:00:23.214767 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c"
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.442020 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Aug 13 20:00:23 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld
Aug 13 20:00:23 crc kubenswrapper[4183]: [+]process-running ok
Aug 13 20:00:23 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.442109 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:00:23 crc kubenswrapper[4183]: I0813 20:00:23.817439 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"]
Aug 13 20:00:23 crc kubenswrapper[4183]: W0813 20:00:23.846698 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1713e8bc_bab0_49a8_8618_9ded2e18906c.slice/crio-1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715 WatchSource:0}: Error finding container 1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715: Status 404 returned error can't find the container with id 1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.033654 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086096 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") pod \"a0453d24-e872-43af-9e7a-86227c26d200\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") "
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086222 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") pod \"a0453d24-e872-43af-9e7a-86227c26d200\" (UID: \"a0453d24-e872-43af-9e7a-86227c26d200\") "
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086428 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a0453d24-e872-43af-9e7a-86227c26d200" (UID: "a0453d24-e872-43af-9e7a-86227c26d200"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.086602 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a0453d24-e872-43af-9e7a-86227c26d200-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.096156 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a0453d24-e872-43af-9e7a-86227c26d200" (UID: "a0453d24-e872-43af-9e7a-86227c26d200"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.188626 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0453d24-e872-43af-9e7a-86227c26d200-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229861 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a0453d24-e872-43af-9e7a-86227c26d200","Type":"ContainerDied","Data":"beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319"} Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229921 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb700893f285f1004019874abdcd9484d578d674149630d4658c680e6991319" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.229949 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.237458 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerStarted","Data":"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715"} Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.274326 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.278576 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.293300 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:24 crc kubenswrapper[4183]: 
I0813 20:00:24.460858 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:24 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:24 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:24 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.460981 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.804322 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.871691 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.871820 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.873620 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" 
start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.873700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.949736 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:24 crc kubenswrapper[4183]: I0813 20:00:24.949926 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.065141 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"] Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.213757 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.242361 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"] Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.243561 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted 
volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.249038 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerStarted","Data":"6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251370 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerStarted","Data":"7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251428 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerStarted","Data":"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.251569 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" containerID="cri-o://7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7" gracePeriod=30 Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.252282 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.258232 4183 patch_prober.go:28] interesting pod/controller-manager-67685c4459-7p2h8 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.258715 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.262914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.262974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.271544 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerStarted","Data":"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8"} Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.442476 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:25 crc kubenswrapper[4183]: [-]has-synced failed: reason withheld Aug 13 20:00:25 crc 
kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:25 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.442661 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.512090 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-console/console-5d9678894c-wx62n" podStartSLOduration=7.512029209 podStartE2EDuration="7.512029209s" podCreationTimestamp="2025-08-13 20:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.508307233 +0000 UTC m=+992.200972071" watchObservedRunningTime="2025-08-13 20:00:25.512029209 +0000 UTC m=+992.204694147" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.563868 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" podStartSLOduration=14.563758574 podStartE2EDuration="14.563758574s" podCreationTimestamp="2025-08-13 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.56185584 +0000 UTC m=+992.254520888" watchObservedRunningTime="2025-08-13 20:00:25.563758574 +0000 UTC m=+992.256423352" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.794333 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.794578 4183 topology_manager.go:215] "Topology Admit Handler" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" podNamespace="openshift-image-registry" 
podName="image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: E0813 20:00:25.797195 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.797239 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.797633 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0453d24-e872-43af-9e7a-86227c26d200" containerName="pruner" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.800477 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.946364 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.948726 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949007 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: 
\"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949154 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949303 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.949605 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.951486 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.959620 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podStartSLOduration=14.95954932 podStartE2EDuration="14.95954932s" podCreationTimestamp="2025-08-13 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:25.744372094 +0000 UTC m=+992.437037032" watchObservedRunningTime="2025-08-13 20:00:25.95954932 +0000 UTC m=+992.652214048" Aug 13 20:00:25 crc kubenswrapper[4183]: I0813 20:00:25.960208 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053304 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053466 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053568 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053600 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053652 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053699 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.053763 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.056353 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 
crc kubenswrapper[4183]: I0813 20:00:26.057262 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.060476 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.072588 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.077750 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.095379 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.117737 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.226722 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.240942 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.324629 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.328329 4183 generic.go:334] "Generic (PLEG): container finished" podID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerID="7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7" exitCode=2 Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.344900 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.345270 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" containerID="cri-o://6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5" gracePeriod=30 Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.350498 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerDied","Data":"7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7"} Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.352716 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.398176 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.459657 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.460573 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.461266 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.461434 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.478939 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479169 4183 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479315 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.479434 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") pod \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\" (UID: \"c5bb4cdd-21b9-49ed-84ae-a405b60a0306\") " Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.480475 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.480720 4183 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.484328 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.478755 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.545169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk" (OuterVolumeSpecName: "kube-api-access-khtlk") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "kube-api-access-khtlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.578325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.579830 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589642 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-khtlk\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-kube-api-access-khtlk\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589706 4183 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-bound-sa-token\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589728 4183 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-tls\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589743 4183 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-registry-certificates\") on 
node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.589861 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-trusted-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.607624 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.693416 4183 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5bb4cdd-21b9-49ed-84ae-a405b60a0306-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.719467 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "c5bb4cdd-21b9-49ed-84ae-a405b60a0306" (UID: "c5bb4cdd-21b9-49ed-84ae-a405b60a0306"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.743450 4183 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Aug 13 20:00:26 crc kubenswrapper[4183]: [+]has-synced ok Aug 13 20:00:26 crc kubenswrapper[4183]: [+]process-running ok Aug 13 20:00:26 crc kubenswrapper[4183]: healthz check failed Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.743560 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.795611 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.841473 4183 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.842278 4183 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount\"" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.857663 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" Aug 13 20:00:26 crc kubenswrapper[4183]: I0813 20:00:26.959176 4183 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podaf6b67a3-a2bd-4051-9adc-c208a5a65d79"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : Timed out while waiting for systemd to remove kubepods-burstable-podaf6b67a3_a2bd_4051_9adc_c208a5a65d79.slice" Aug 13 20:00:26 crc kubenswrapper[4183]: E0813 20:00:26.959342 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : unable to destroy cgroup paths for cgroup [kubepods burstable podaf6b67a3-a2bd-4051-9adc-c208a5a65d79] : Timed out while waiting for systemd to remove kubepods-burstable-podaf6b67a3_a2bd_4051_9adc_c208a5a65d79.slice" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" Aug 13 20:00:27 crc kubenswrapper[4183]: E0813 20:00:27.229118 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.381153 4183 generic.go:334] "Generic (PLEG): container finished" podID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerID="6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5" exitCode=0 Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.386049 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.381490 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerDied","Data":"6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5"} Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.390105 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-585546dd8b-v5m4t" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.439866 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.444650 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.455221 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.503648 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c4dbb8899-tchz5"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.530253 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-7cbd5666ff-bbfrf\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.609055 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.614083 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.622491 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.623115 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.640968 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-585546dd8b-v5m4t"] Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.925967 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:27 crc kubenswrapper[4183]: I0813 20:00:27.964493 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.216388 4183 patch_prober.go:28] interesting pod/route-controller-manager-6cfd9fc8fc-7sbzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.216741 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.409079 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerStarted","Data":"7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763"} Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.500363 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.500303538 podStartE2EDuration="11.500303538s" podCreationTimestamp="2025-08-13 20:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:28.495690207 +0000 UTC m=+995.188354975" watchObservedRunningTime="2025-08-13 20:00:28.500303538 +0000 UTC m=+995.192968266" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.958488 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log" Aug 13 20:00:28 crc kubenswrapper[4183]: I0813 20:00:28.958581 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.062890 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"] Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063091 4183 topology_manager.go:215] "Topology Admit Handler" podUID="00d32440-4cce-4609-96f3-51ac94480aab" podNamespace="openshift-controller-manager" podName="controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: E0813 20:00:29.063268 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063287 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063420 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" containerName="controller-manager" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.063968 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072336 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072441 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072480 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072519 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.072558 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") pod \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\" (UID: \"a560ec6a-586f-403c-a08e-e3a76fa1b7fd\") " Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.074365 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca" (OuterVolumeSpecName: "client-ca") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.075255 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config" (OuterVolumeSpecName: "config") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.075384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.097608 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.098220 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6" (OuterVolumeSpecName: "kube-api-access-5w8t6") pod "a560ec6a-586f-403c-a08e-e3a76fa1b7fd" (UID: "a560ec6a-586f-403c-a08e-e3a76fa1b7fd"). InnerVolumeSpecName "kube-api-access-5w8t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175480 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175590 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175748 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.175897 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176096 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176150 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176166 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5w8t6\" (UniqueName: \"kubernetes.io/projected/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-kube-api-access-5w8t6\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176182 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176199 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.176210 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a560ec6a-586f-403c-a08e-e3a76fa1b7fd-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.227261 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6b67a3-a2bd-4051-9adc-c208a5a65d79" path="/var/lib/kubelet/pods/af6b67a3-a2bd-4051-9adc-c208a5a65d79/volumes" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.238069 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5bb4cdd-21b9-49ed-84ae-a405b60a0306" 
path="/var/lib/kubelet/pods/c5bb4cdd-21b9-49ed-84ae-a405b60a0306/volumes" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.277915 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278005 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278102 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.278165 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.280764 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.289748 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.303540 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.297027 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.446095 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager_controller-manager-67685c4459-7p2h8_a560ec6a-586f-403c-a08e-e3a76fa1b7fd/controller-manager/0.log" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.447603 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.448594 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67685c4459-7p2h8" event={"ID":"a560ec6a-586f-403c-a08e-e3a76fa1b7fd","Type":"ContainerDied","Data":"51aea926a857cd455ca0d021b49fa37618de4d0422d7dc1eb122be83f78ae2aa"} Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.448637 4183 scope.go:117] "RemoveContainer" containerID="7772cfe77a9084a8b1da62b48709afa4195652cf6fbe8e33fe7a5414394f71e7" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.534635 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"controller-manager-78589965b8-vmcwt\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.542744 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.547562 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.580460 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" 
start-of-body= Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.580551 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.727692 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.759209 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"] Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.892572 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:29 crc kubenswrapper[4183]: I0813 20:00:29.908205 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67685c4459-7p2h8"] Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.302407 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435154 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435222 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435287 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.435338 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") pod \"1713e8bc-bab0-49a8-8618-9ded2e18906c\" (UID: \"1713e8bc-bab0-49a8-8618-9ded2e18906c\") " Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.438191 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config" (OuterVolumeSpecName: "config") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.443688 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.458748 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.496356 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb" (OuterVolumeSpecName: "kube-api-access-9qgvb") pod "1713e8bc-bab0-49a8-8618-9ded2e18906c" (UID: "1713e8bc-bab0-49a8-8618-9ded2e18906c"). InnerVolumeSpecName "kube-api-access-9qgvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.523608 4183 generic.go:334] "Generic (PLEG): container finished" podID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" containerID="c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6" exitCode=0 Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.523720 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerDied","Data":"c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6"} Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.524488 4183 scope.go:117] "RemoveContainer" containerID="c39ec2f009f84a11146853eb53b1073037d39ef91f4d853abf6b613d7e2758e6" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538585 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538648 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1713e8bc-bab0-49a8-8618-9ded2e18906c-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538667 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9qgvb\" (UniqueName: \"kubernetes.io/projected/1713e8bc-bab0-49a8-8618-9ded2e18906c-kube-api-access-9qgvb\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.538681 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713e8bc-bab0-49a8-8618-9ded2e18906c-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.546888 4183 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" event={"ID":"1713e8bc-bab0-49a8-8618-9ded2e18906c","Type":"ContainerDied","Data":"1f55b781eeb63db4da6e3bc3852aae7ae0cefc781245125be87fc29e75ead715"} Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.547014 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.547043 4183 scope.go:117] "RemoveContainer" containerID="6f473c92f07e1c47edf5b8e65134aeb43315eb0c72514a8b4132da92f81b1fe5" Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.863030 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"] Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.873688 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"] Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.902979 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cfd9fc8fc-7sbzw"] Aug 13 20:00:30 crc kubenswrapper[4183]: I0813 20:00:30.987534 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Aug 13 20:00:30 crc kubenswrapper[4183]: W0813 20:00:30.987941 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a7bc46_2f44_4aff_9cb5_97c97a4a8319.slice/crio-7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e WatchSource:0}: Error finding container 7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e: Status 404 returned error can't find the container with id 7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e Aug 13 20:00:31 crc 
kubenswrapper[4183]: I0813 20:00:31.086667 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"] Aug 13 20:00:31 crc kubenswrapper[4183]: W0813 20:00:31.106958 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d32440_4cce_4609_96f3_51ac94480aab.slice/crio-97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9 WatchSource:0}: Error finding container 97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9: Status 404 returned error can't find the container with id 97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9 Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.228752 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" path="/var/lib/kubelet/pods/1713e8bc-bab0-49a8-8618-9ded2e18906c/volumes" Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.230549 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a560ec6a-586f-403c-a08e-e3a76fa1b7fd" path="/var/lib/kubelet/pods/a560ec6a-586f-403c-a08e-e3a76fa1b7fd/volumes" Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.586239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerStarted","Data":"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4"} Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.596368 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerStarted","Data":"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9"} Aug 13 20:00:31 crc kubenswrapper[4183]: I0813 20:00:31.624983 4183 kubelet.go:2461] "SyncLoop (PLEG): event for 
pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"7356b549b0982e9c27e0a88782d3f3e7496dc427a4624d350543676e28d5f73e"} Aug 13 20:00:32 crc kubenswrapper[4183]: E0813 20:00:32.222479 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.16\\\"\"" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" Aug 13 20:00:32 crc kubenswrapper[4183]: I0813 20:00:32.647092 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"e95a2bd82003b18d4f81fa9d98e21982ecce835638a4f389a02f1c7db1efd2d6"} Aug 13 20:00:33 crc kubenswrapper[4183]: E0813 20:00:33.233310 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.403280 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"] Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.403521 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: E0813 20:00:33.411971 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" 
containerName="route-controller-manager" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.412025 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.412233 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="1713e8bc-bab0-49a8-8618-9ded2e18906c" containerName="route-controller-manager" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.413558 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435584 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435944 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435598 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.436371 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.435720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.445125 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.511701 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"] Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515590 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515713 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515757 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.515908 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618353 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618536 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.618569 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.620528 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " 
pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.620550 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.636224 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.656596 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerStarted","Data":"32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84"} Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.669757 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerStarted","Data":"71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"} Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.672107 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.686249 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.686351 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.687119 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53"} Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.688349 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.881044 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"route-controller-manager-846977c6bc-7gjhh\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:33 crc kubenswrapper[4183]: I0813 20:00:33.989830 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podStartSLOduration=35619978.989690684 podStartE2EDuration="9894h26m18.989690681s" podCreationTimestamp="2024-06-27 13:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-08-13 20:00:33.978483142 +0000 UTC m=+1000.671147910" watchObservedRunningTime="2025-08-13 20:00:33.989690681 +0000 UTC m=+1000.682355409" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.051124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.153396 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podStartSLOduration=10.153340467 podStartE2EDuration="10.153340467s" podCreationTimestamp="2025-08-13 20:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:34.152671758 +0000 UTC m=+1000.845336576" watchObservedRunningTime="2025-08-13 20:00:34.153340467 +0000 UTC m=+1000.846005335" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.752986 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerDied","Data":"cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014"} Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.755623 4183 scope.go:117] "RemoveContainer" containerID="cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.778290 4183 generic.go:334] "Generic (PLEG): container finished" podID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" containerID="cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014" exitCode=0 Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.784930 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:34 crc 
kubenswrapper[4183]: I0813 20:00:34.811093 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.876467 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.877102 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.877160 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.878764 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.878979 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a" gracePeriod=2 Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.883544 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.883678 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.884083 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.884124 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.949186 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:00:34 crc kubenswrapper[4183]: I0813 20:00:34.949289 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:35 crc kubenswrapper[4183]: 
I0813 20:00:35.099506 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podStartSLOduration=10.091607161 podStartE2EDuration="10.091607161s" podCreationTimestamp="2025-08-13 20:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:00:34.228453009 +0000 UTC m=+1000.921117757" watchObservedRunningTime="2025-08-13 20:00:35.091607161 +0000 UTC m=+1001.784272259" Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793329 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log" Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793791 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" containerID="47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069" exitCode=1 Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.793984 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerDied","Data":"47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069"} Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.794920 4183 scope.go:117] "RemoveContainer" containerID="47802e2c3506925156013fb9ab1b2e35c0b10d40b6540eabeb02eed57b691069" Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.802757 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a" exitCode=0 Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.804097 
4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"} Aug 13 20:00:35 crc kubenswrapper[4183]: I0813 20:00:35.804154 4183 scope.go:117] "RemoveContainer" containerID="f644dddfd8fc5546a8b056ca1431e4924ae5d29333100579d5e0759c289e206f" Aug 13 20:00:36 crc kubenswrapper[4183]: E0813 20:00:36.213445 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.16\\\"\"" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.534373 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"] Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.810824 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerStarted","Data":"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e"} Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.955501 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"] Aug 13 20:00:36 crc kubenswrapper[4183]: I0813 20:00:36.958703 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-9-crc" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer" containerID="cri-o://1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" gracePeriod=30 Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.119038 4183 
kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.119167 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" podNamespace="openshift-kube-scheduler" podName="installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.120818 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.138623 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.147529 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-9ln8g" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150315 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150644 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.150879 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: 
\"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.238027 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.253661 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.253867 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.254054 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.261225 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.261665 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"installer-7-crc\" 
(UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.605668 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"installer-7-crc\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.792007 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.804994 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"] Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.814754 4183 topology_manager.go:215] "Topology Admit Handler" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-10-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.816472 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.880656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.880746 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.983580 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.983635 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:37 crc kubenswrapper[4183]: I0813 20:00:37.984187 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"revision-pruner-10-crc\" 
(UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:38 crc kubenswrapper[4183]: I0813 20:00:38.454577 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"] Aug 13 20:00:39 crc kubenswrapper[4183]: E0813 20:00:39.390016 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.568118 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.569114 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.582126 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.696974 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 
20:00:39.697465 4183 topology_manager.go:215] "Topology Admit Handler" podUID="79050916-d488-4806-b556-1b0078b31e53" podNamespace="openshift-kube-controller-manager" podName="installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.700363 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.753930 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" containerID="cri-o://0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" gracePeriod=14 Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810566 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810673 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.810716 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc 
kubenswrapper[4183]: I0813 20:00:39.831172 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944011 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944184 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944573 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:39 crc kubenswrapper[4183]: I0813 20:00:39.944690 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " 
pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:40 crc kubenswrapper[4183]: I0813 20:00:40.096732 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc" Aug 13 20:00:40 crc kubenswrapper[4183]: I0813 20:00:40.416091 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"installer-10-crc\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.385622 4183 generic.go:334] "Generic (PLEG): container finished" podID="13ad7555-5f28-4555-a563-892713a8433a" containerID="0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" exitCode=0 Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.386137 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerDied","Data":"0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1"} Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.401449 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-mtx25"] Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.410324 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" containerID="cri-o://a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" gracePeriod=90 Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.411028 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" 
podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" gracePeriod=90 Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.422041 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.16\\\"\"" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.458973 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-mtx25"] Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.702243 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703251 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b23d6435-6431-4905-b41b-a517327385e5" podNamespace="openshift-apiserver" podName="apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.703572 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703675 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.703766 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.703958 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" 
containerName="openshift-apiserver-check-endpoints" Aug 13 20:00:41 crc kubenswrapper[4183]: E0813 20:00:41.704089 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="fix-audit-permissions" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704172 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="fix-audit-permissions" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704371 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.704486 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerName="openshift-apiserver-check-endpoints" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.705521 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.738116 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834000 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834386 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834513 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834694 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.834930 4183 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835076 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835192 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835300 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835453 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc 
kubenswrapper[4183]: I0813 20:00:41.835576 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.835753 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.939227 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.970617 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.971536 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974603 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974710 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.974774 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975084 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975197 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975283 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975327 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975403 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.975474 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.979601 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.980346 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.980404 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:41 crc kubenswrapper[4183]: I0813 20:00:41.994656 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.001627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.003866 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.016768 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod 
\"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.070052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.084201 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.107393 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.354144 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:00:42 crc kubenswrapper[4183]: I0813 20:00:42.892229 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512" exitCode=0 Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.240716 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336192 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336353 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336387 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336422 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: 
\"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336463 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336507 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336572 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336627 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336677 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 
crc kubenswrapper[4183]: I0813 20:00:43.336719 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336762 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336889 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.336935 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") pod \"13ad7555-5f28-4555-a563-892713a8433a\" (UID: \"13ad7555-5f28-4555-a563-892713a8433a\") " Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.342265 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.358965 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.362115 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.362656 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.363739 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.380757 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68" (OuterVolumeSpecName: "kube-api-access-w4r68") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "kube-api-access-w4r68". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.411029 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412205 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412924 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.412973 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.414127 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.421348 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.424319 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.427660 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "13ad7555-5f28-4555-a563-892713a8433a" (UID: "13ad7555-5f28-4555-a563-892713a8433a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439072 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13ad7555-5f28-4555-a563-892713a8433a-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439131 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439151 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439165 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439179 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-provider-selection\") 
on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439193 4183 reconciler_common.go:300] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-audit-policies\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439206 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439219 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439231 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439245 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439258 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439272 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439283 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w4r68\" (UniqueName: \"kubernetes.io/projected/13ad7555-5f28-4555-a563-892713a8433a-kube-api-access-w4r68\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:43 crc kubenswrapper[4183]: I0813 20:00:43.439296 4183 reconciler_common.go:300] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13ad7555-5f28-4555-a563-892713a8433a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.005191 4183 generic.go:334] "Generic (PLEG): container finished" podID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" containerID="346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2" exitCode=0 Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.005354 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerDied","Data":"346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2"} Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.006295 4183 scope.go:117] "RemoveContainer" containerID="346fc13eab4a6442e7eb6bb7019dac9a1216274ae59cd519b5e7474a1dd1b4e2" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.074016 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" event={"ID":"13ad7555-5f28-4555-a563-892713a8433a","Type":"ContainerDied","Data":"8266ab3300c992b59b23d4fcd1c7a7c7c8c97e041b449a5bbd87fb5e57084141"} Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.074906 4183 scope.go:117] "RemoveContainer" containerID="0c7b53a35a67b2526c5310571264cb255c68a5ac90b79fcfed3ea524243646e1" Aug 13 20:00:44 
crc kubenswrapper[4183]: I0813 20:00:44.075503 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-765b47f944-n2lhl" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.871563 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.871677 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.884710 4183 patch_prober.go:28] interesting pod/authentication-operator-7cc7ff75d5-g9qv8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.884925 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.952264 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" 
start-of-body= Aug 13 20:00:44 crc kubenswrapper[4183]: I0813 20:00:44.953407 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.272608 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.503656 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"apiserver-67cbf64bc9-jjfds\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.603890 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"f8740679d62a596414a4bace5b51c52a61eb8997cb3aad74b6e37fb0898cbd9a"} Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.663716 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.788531 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.872562 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"] Aug 13 20:00:45 crc kubenswrapper[4183]: I0813 20:00:45.899327 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.265636 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"] Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266229 4183 topology_manager.go:215] "Topology Admit Handler" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" podNamespace="openshift-authentication" podName="oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: E0813 20:00:46.266462 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266482 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.266635 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ad7555-5f28-4555-a563-892713a8433a" containerName="oauth-openshift" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.284461 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.369983 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.378862 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379339 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379608 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.379753 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" 
(UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380041 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380171 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380307 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.380571 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 
20:00:46.385252 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.385923 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.386294 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.414543 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.454696 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.455345 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.463164 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.463969 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.466214 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 
20:00:46.471661 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472147 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472334 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472521 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.472656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.467659 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.474041 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.507414 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.511328 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576295 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576402 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576476 4183 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576508 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576539 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576562 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576589 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576621 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576650 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576690 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.576717 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.583259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592742 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592943 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.592999 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.636523 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"] Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.647947 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.648016 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.649387 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.683061 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.689520 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.733286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.736500 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.750459 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.753396 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.761600 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"] Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.790375 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod 
\"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.799700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.820428 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.820881 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.891525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.956583 4183 generic.go:334] "Generic (PLEG): container finished" podID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" containerID="2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039" exitCode=0 Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.956890 4183 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerDied","Data":"2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039"} Aug 13 20:00:46 crc kubenswrapper[4183]: I0813 20:00:46.957877 4183 scope.go:117] "RemoveContainer" containerID="2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.161170 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerStarted","Data":"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab"} Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.176297 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-765b47f944-n2lhl"] Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.185972 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.304578 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.400373 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ad7555-5f28-4555-a563-892713a8433a" path="/var/lib/kubelet/pods/13ad7555-5f28-4555-a563-892713a8433a/volumes" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.558469 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerStarted","Data":"417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"} Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.560090 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:47 crc kubenswrapper[4183]: I0813 20:00:47.837463 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerStarted","Data":"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"} Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.067045 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-686c6c748c-qbnnr_9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7/kube-storage-version-migrator-operator/0.log" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.067940 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"043a876882e6525ddc5f76decf1b6c920a7b88ce28a2364941d8f877fa66e241"} Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 
20:00:48.239693 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.501762 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"} Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.519739 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.519982 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.520026 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:48 crc kubenswrapper[4183]: I0813 20:00:48.607341 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerStarted","Data":"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc"} Aug 13 20:00:49 crc kubenswrapper[4183]: I0813 20:00:49.547720 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: 
connection refused" start-of-body= Aug 13 20:00:49 crc kubenswrapper[4183]: I0813 20:00:49.549557 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:00:51 crc kubenswrapper[4183]: I0813 20:00:51.371645 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:51 crc kubenswrapper[4183]: I0813 20:00:51.372722 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696048 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696731 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696861 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696908 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.696966 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.881030 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.882103 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.881030 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.882186 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.884295 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.952035 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: 
connection refused" start-of-body= Aug 13 20:00:54 crc kubenswrapper[4183]: I0813 20:00:54.954131 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.205724 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.978620 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log" Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.981442 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerDied","Data":"1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99"} Aug 13 20:00:55 crc kubenswrapper[4183]: I0813 20:00:55.986820 4183 generic.go:334] "Generic (PLEG): container finished" podID="227e3650-2a85-4229-8099-bb53972635b2" containerID="1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" exitCode=1 Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700337 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]log ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:00:56 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:00:56 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:00:56 crc kubenswrapper[4183]: 
[+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:00:56 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:00:56 crc kubenswrapper[4183]: readyz check failed Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700486 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:00:56 crc kubenswrapper[4183]: I0813 20:00:56.700620 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:00:57 crc kubenswrapper[4183]: I0813 20:00:57.632304 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:00:58 crc kubenswrapper[4183]: I0813 20:00:58.184180 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.540555 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.541338 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.540701 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"524f541503e673b38ef89e50d9e4dfc8448cecf293a683f236b94f15ea56379f"} Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.623278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"d21952f722a78650eafeaffd3eee446ec3e6f45e0e0dff0fde9b755169ca68a0"} Aug 13 20:00:59 crc kubenswrapper[4183]: I0813 20:00:59.986334 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"] Aug 13 20:01:00 crc kubenswrapper[4183]: I0813 20:01:00.033563 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:01:00 crc kubenswrapper[4183]: W0813 20:01:00.559067 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23d6435_6431_4905_b41b_a517327385e5.slice/crio-411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58 WatchSource:0}: Error finding container 411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58: Status 404 returned error can't find the container with id 411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58 Aug 13 20:01:00 crc kubenswrapper[4183]: W0813 20:01:00.777733 4183 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01feb2e0_a0f4_4573_8335_34e364e0ef40.slice/crio-ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404 WatchSource:0}: Error finding container ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404: Status 404 returned error can't find the container with id ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404 Aug 13 20:01:01 crc kubenswrapper[4183]: I0813 20:01:01.334242 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58"} Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.077330 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.079077 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.701589 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-9-crc_227e3650-2a85-4229-8099-bb53972635b2/installer/0.log" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702169 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-9-crc" event={"ID":"227e3650-2a85-4229-8099-bb53972635b2","Type":"ContainerDied","Data":"ca267bd7a205181e470f424d652801f7ec40bf5a8c5b2880b6cf133cd7e518ef"} Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702390 4183 scope.go:117] "RemoveContainer" containerID="1bbed3b469cb62a0e76b6e9718249f34f00007dc9f9e6dcd22b346fb357ece99" Aug 13 20:01:02 crc kubenswrapper[4183]: I0813 20:01:02.702657 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-9-crc" Aug 13 20:01:03 crc kubenswrapper[4183]: I0813 20:01:03.198645 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"ca33bd29c9a026f2de2ac8dc0aaa5c02eb359b8d1ced732874be833c45043404"} Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.873700 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.874405 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial 
tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.876409 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.876497 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.949495 4183 patch_prober.go:28] interesting pod/console-84fccc7b6-mkncc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Aug 13 20:01:04 crc kubenswrapper[4183]: I0813 20:01:04.949643 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.275984 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:05 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:05 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:05 crc 
kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:05 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:01:05 crc kubenswrapper[4183]: readyz check failed Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.276114 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:05 crc kubenswrapper[4183]: I0813 20:01:05.481071 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.005457 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.006124 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.006301 4183 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") pod \"227e3650-2a85-4229-8099-bb53972635b2\" (UID: \"227e3650-2a85-4229-8099-bb53972635b2\") " Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.010689 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock" (OuterVolumeSpecName: "var-lock") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.010732 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.032166 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "227e3650-2a85-4229-8099-bb53972635b2" (UID: "227e3650-2a85-4229-8099-bb53972635b2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.108676 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.108732 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/227e3650-2a85-4229-8099-bb53972635b2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:06 crc kubenswrapper[4183]: I0813 20:01:06.120371 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/227e3650-2a85-4229-8099-bb53972635b2-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:07 crc kubenswrapper[4183]: I0813 20:01:07.572965 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podStartSLOduration=42.572913908 podStartE2EDuration="42.572913908s" podCreationTimestamp="2025-08-13 20:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:06.779492984 +0000 UTC m=+1033.472157982" watchObservedRunningTime="2025-08-13 20:01:07.572913908 +0000 UTC m=+1034.265578806"
Aug 13 20:01:07 crc kubenswrapper[4183]: I0813 20:01:07.619329 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerStarted","Data":"c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83"}
Aug 13 20:01:08 crc kubenswrapper[4183]: I0813 20:01:08.424319 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerStarted","Data":"e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5"}
Aug 13 20:01:08 crc kubenswrapper[4183]: I0813 20:01:08.733261 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-7cbd5666ff-bbfrf"]
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.200767 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.201015 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.316197 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84fccc7b6-mkncc"]
Aug 13 20:01:10 crc kubenswrapper[4183]: E0813 20:01:10.498578 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod2f155735_a9be_4621_a5f2_5ab4b6957acd.slice/crio-e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod2f155735_a9be_4621_a5f2_5ab4b6957acd.slice/crio-conmon-e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5.scope\": RecentStats: unable to find data in memory cache]"
Aug 13 20:01:10 crc kubenswrapper[4183]: I0813 20:01:10.967968 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerStarted","Data":"f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721"}
Aug 13 20:01:12 crc kubenswrapper[4183]: I0813 20:01:12.209177 4183 generic.go:334] "Generic (PLEG): container finished" podID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerID="e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5" exitCode=0
Aug 13 20:01:12 crc kubenswrapper[4183]: I0813 20:01:12.209422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerDied","Data":"e7256098c4244337df430457265e378ddf1b268c176bafd4d6fa5a52a80adfe5"}
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.357581 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.357749 4183 topology_manager.go:215] "Topology Admit Handler" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" podNamespace="openshift-console" podName="console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: E0813 20:01:14.358204 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.358223 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.358394 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="227e3650-2a85-4229-8099-bb53972635b2" containerName="installer"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.359130 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485496 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485604 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485650 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485691 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485735 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485888 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.485974 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.589709 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.591564 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.591746 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.593750 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.593991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.594177 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.594646 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.602313 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.603191 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.608153 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.609463 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.612142 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.612556 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872504 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872632 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872695 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874520 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874583 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" gracePeriod=2
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.872512 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.874887 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.876616 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.876700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985882 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985943 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985989 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 20:01:14 crc kubenswrapper[4183]: I0813 20:01:14.985997 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.667879 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:16 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:16 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.668083 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.668168 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:16 crc kubenswrapper[4183]: I0813 20:01:16.745284 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"391bd49947a0ae3e13b214a022dc7f8ebc8a0337699d428008fe902a18d050a6"}
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159036 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159333 4183 generic.go:334] "Generic (PLEG): container finished" podID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerID="47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239" exitCode=1
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159362 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerDied","Data":"47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"}
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.159818 4183 scope.go:117] "RemoveContainer" containerID="47f4fe3d214f9afb61d4c14f1173afecfd243739000ced3d025f9611dbfd4239"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.614687 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.673898 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") pod \"2f155735-a9be-4621-a5f2-5ab4b6957acd\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") "
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.674125 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") pod \"2f155735-a9be-4621-a5f2-5ab4b6957acd\" (UID: \"2f155735-a9be-4621-a5f2-5ab4b6957acd\") "
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.674669 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2f155735-a9be-4621-a5f2-5ab4b6957acd" (UID: "2f155735-a9be-4621-a5f2-5ab4b6957acd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.720762 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2f155735-a9be-4621-a5f2-5ab4b6957acd" (UID: "2f155735-a9be-4621-a5f2-5ab4b6957acd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.776045 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f155735-a9be-4621-a5f2-5ab4b6957acd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.776112 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f155735-a9be-4621-a5f2-5ab4b6957acd-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:01:17 crc kubenswrapper[4183]: I0813 20:01:17.947235 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.410224 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3" exitCode=0
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.410384 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3"}
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.613964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"2f155735-a9be-4621-a5f2-5ab4b6957acd","Type":"ContainerDied","Data":"c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"}
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.615688 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05ff35bd00034fcfab3a644cd84bcb84bc4a9c535bd6172e2012a7d16ea6eb5"
Aug 13 20:01:18 crc kubenswrapper[4183]: I0813 20:01:18.615583 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Aug 13 20:01:19 crc kubenswrapper[4183]: I0813 20:01:19.540752 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:19 crc kubenswrapper[4183]: I0813 20:01:19.541070 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010289 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" exitCode=0
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010422 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5"}
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.010464 4183 scope.go:117] "RemoveContainer" containerID="50e7a71dc2a39377a3d66cf968c9c75001c5782d329877e2aeb63cfbd66e826a"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.134694 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.312504 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.355962 4183 generic.go:334] "Generic (PLEG): container finished" podID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" containerID="20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687" exitCode=0
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.359304 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerDied","Data":"20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"}
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.359386 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.360392 4183 scope.go:117] "RemoveContainer" containerID="20a713ea366c19c1b427548e8b8ab979d2ae1d350c086fe1a4874181f4de7687"
Aug 13 20:01:20 crc kubenswrapper[4183]: I0813 20:01:20.468060 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.024540 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.602986 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78589965b8-vmcwt"]
Aug 13 20:01:21 crc kubenswrapper[4183]: I0813 20:01:21.603405 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" containerID="cri-o://71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830" gracePeriod=30
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.206371 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.468707 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-5dbbc74dc9-cp5cd_e9127708-ccfd-4891-8a3a-f0cacb77e0f4/console-operator/0.log"
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.471111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"de440c5d69c49e4ae9a6d8d6a8c21cebc200a69199b6854aa7edf579fd041ccd"}
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.472858 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.565665 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh"]
Aug 13 20:01:22 crc kubenswrapper[4183]: I0813 20:01:22.565985 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" containerID="cri-o://417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3" gracePeriod=30
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.396139 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-9-crc"]
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.473329 4183 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.473426 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:23 crc kubenswrapper[4183]: I0813 20:01:23.625377 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-644bb77b49-5x5xk"]
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.053119 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.053229 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: W0813 20:01:24.084861 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e649ef6_bbda_4ad9_8a09_ac3803dd0cc1.slice/crio-48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107 WatchSource:0}: Error finding container 48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107: Status 404 returned error can't find the container with id 48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.294535 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.295758 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.295918 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.297091 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.668324 4183 generic.go:334] "Generic (PLEG): container finished" podID="00d32440-4cce-4609-96f3-51ac94480aab" containerID="71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830" exitCode=0
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.668470 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerDied","Data":"71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.871746 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.872426 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.871878 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.872488 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.896333 4183 generic.go:334] "Generic (PLEG): container finished" podID="71af81a9-7d43-49b2-9287-c375900aa905" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e" exitCode=0
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.897921 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerDied","Data":"e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"}
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.898721 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"
Aug 13 20:01:24 crc kubenswrapper[4183]: I0813 20:01:24.909362 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Aug 13 20:01:25 crc kubenswrapper[4183]: I0813 20:01:25.425912 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227e3650-2a85-4229-8099-bb53972635b2" path="/var/lib/kubelet/pods/227e3650-2a85-4229-8099-bb53972635b2/volumes"
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.201431 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.469691 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerDied","Data":"417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.470093 4183 generic.go:334] "Generic (PLEG): container finished" podID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerID="417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3" exitCode=0
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.805506 4183 generic.go:334] "Generic (PLEG): container finished" podID="b54e8941-2fc4-432a-9e51-39684df9089e" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" exitCode=0
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.805810 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerDied","Data":"dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"}
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.806954 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.807062 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:26 crc kubenswrapper[4183]: I0813 20:01:26.807600 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540"
Aug 13 20:01:27 crc 
kubenswrapper[4183]: I0813 20:01:27.650207 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.650662 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.653706 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:27 crc kubenswrapper[4183]: I0813 20:01:27.654104 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:28 crc kubenswrapper[4183]: I0813 20:01:28.295104 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"2af5bb0c4b139d706151f3201c47d8cc989a3569891ca64ddff1c17afff77399"} Aug 13 20:01:29 crc kubenswrapper[4183]: I0813 20:01:29.540695 4183 patch_prober.go:28] interesting 
pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:01:29 crc kubenswrapper[4183]: I0813 20:01:29.541479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.649538 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.650102 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.649680 4183 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.650213 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.732117 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:30 crc kubenswrapper[4183]: I0813 20:01:30.732259 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.296466 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a"} Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307275 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:31 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:31 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 
13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:31 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:01:31 crc kubenswrapper[4183]: readyz check failed Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307529 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.307770 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:31 crc kubenswrapper[4183]: I0813 20:01:31.525000 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.590474 4183 generic.go:334] "Generic (PLEG): container finished" podID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" exitCode=0 Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.591013 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerDied","Data":"de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220"} Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.591986 4183 scope.go:117] 
"RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.798229 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-77658b5b66-dq5sc_530553aa-0a1d-423e-8a22-f5eb4bdbb883/openshift-config-operator/0.log" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799503 4183 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" exitCode=0 Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799574 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b"} Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.799630 4183 scope.go:117] "RemoveContainer" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" Aug 13 20:01:32 crc kubenswrapper[4183]: I0813 20:01:32.800480 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" Aug 13 20:01:33 crc kubenswrapper[4183]: I0813 20:01:33.649066 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:01:33 crc kubenswrapper[4183]: I0813 20:01:33.649137 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873292 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873437 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873433 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:34 crc kubenswrapper[4183]: I0813 20:01:34.873679 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.052072 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.052240 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: i/o timeout (Client.Timeout 
exceeded while awaiting headers)" Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.307817 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-7-crc" podStartSLOduration=58.307555991 podStartE2EDuration="58.307555991s" podCreationTimestamp="2025-08-13 20:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:35.303077173 +0000 UTC m=+1061.995741941" watchObservedRunningTime="2025-08-13 20:01:35.307555991 +0000 UTC m=+1062.000220839" Aug 13 20:01:35 crc kubenswrapper[4183]: I0813 20:01:35.309160 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-10-crc" podStartSLOduration=56.309123315 podStartE2EDuration="56.309123315s" podCreationTimestamp="2025-08-13 20:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:27.138539278 +0000 UTC m=+1053.831204276" watchObservedRunningTime="2025-08-13 20:01:35.309123315 +0000 UTC m=+1062.001788104" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.078709 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" containerID="cri-o://32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" gracePeriod=28 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.273056 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-84fccc7b6-mkncc" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console" containerID="cri-o://a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9" gracePeriod=15 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668612 4183 
patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:36 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:36 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Aug 13 20:01:36 crc kubenswrapper[4183]: [+]shutdown ok Aug 13 20:01:36 crc kubenswrapper[4183]: readyz check failed Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668747 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.668916 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.890298 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log" Aug 13 20:01:36 crc 
kubenswrapper[4183]: I0813 20:01:36.890423 4183 generic.go:334] "Generic (PLEG): container finished" podID="0f394926-bdb9-425c-b36e-264d7fd34550" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" exitCode=1 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.890579 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerDied","Data":"30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d"} Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.891407 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895752 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log" Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895915 4183 generic.go:334] "Generic (PLEG): container finished" podID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerID="a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9" exitCode=2 Aug 13 20:01:36 crc kubenswrapper[4183]: I0813 20:01:36.895953 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerDied","Data":"a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"} Aug 13 20:01:37 crc kubenswrapper[4183]: I0813 20:01:37.616220 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:01:37 crc kubenswrapper[4183]: I0813 20:01:37.616433 4183 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.540023 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.540131 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.995494 4183 generic.go:334] "Generic (PLEG): container finished" podID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerID="32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" exitCode=0 Aug 13 20:01:39 crc kubenswrapper[4183]: I0813 20:01:39.995692 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerDied","Data":"32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84"} Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.005343 4183 generic.go:334] "Generic (PLEG): container finished" podID="cc291782-27d2-4a74-af79-c7dcb31535d2" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" exitCode=0 Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.005439 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerDied","Data":"ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce"} Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.006541 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.729951 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:01:40 crc kubenswrapper[4183]: I0813 20:01:40.730089 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.098301 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d67253e-2acd-4bc1-8185-793587da4f17" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" exitCode=0 Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.098414 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerDied","Data":"de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc"} Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.099636 4183 scope.go:117] "RemoveContainer" 
containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.872298 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.872449 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873231 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873354 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.873415 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.875268 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"} pod="openshift-console/downloads-65476884b9-9wcvx" 
containerMessage="Container download-server failed liveness probe, will be restarted" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.875340 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" gracePeriod=2 Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.876252 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.876316 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.991710 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]log ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:01:44 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld Aug 13 20:01:44 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:01:44 crc 
kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:44 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:44 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:44 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:44 crc kubenswrapper[4183]: I0813 20:01:44.993555 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:45 crc kubenswrapper[4183]: I0813 20:01:45.053241 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:45 crc kubenswrapper[4183]: I0813 20:01:45.053396 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.001768 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:47 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:47 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.002276 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.615729 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:01:47 crc kubenswrapper[4183]: I0813 20:01:47.616442 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.245860 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [-]etcd failed: reason withheld
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:49 crc kubenswrapper[4183]: livez check failed
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.246065 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.540146 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:49 crc kubenswrapper[4183]: I0813 20:01:49.540335 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.580248 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.580359 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.729450 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:50 crc kubenswrapper[4183]: I0813 20:01:50.729579 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.008964 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]api-openshift-apiserver-available ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]api-openshift-oauth-apiserver-available ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:52 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:52 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.011833 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.012278 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.362490 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.486931 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.487071 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" exitCode=1
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.487115 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerDied","Data":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"}
Aug 13 20:01:52 crc kubenswrapper[4183]: I0813 20:01:52.489136 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"
Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.149519 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.513200 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24" exitCode=0
Aug 13 20:01:53 crc kubenswrapper[4183]: I0813 20:01:53.513465 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"}
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.654140 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.654271 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.662178 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [-]etcd failed: reason withheld
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:54 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:54 crc kubenswrapper[4183]: healthz check failed
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.662334 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697708 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697940 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.697999 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.872519 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:01:54 crc kubenswrapper[4183]: I0813 20:01:54.872695 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:01:55 crc kubenswrapper[4183]: I0813 20:01:55.052469 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:01:55 crc kubenswrapper[4183]: I0813 20:01:55.052615 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:01:56 crc kubenswrapper[4183]: I0813 20:01:56.187358 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [-]etcd failed: reason withheld
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]etcd-readiness ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]api-openshift-apiserver-available ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]api-openshift-oauth-apiserver-available ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-api-request-count-filter ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startkubeinformers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-admission-initializer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-consumer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-filter ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-apiextensions-controllers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/crd-informer-synced ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-service-ip-repair-controllers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/rbac/bootstrap-roles ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/priority-and-fairness-config-producer ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-system-namespaces-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/bootstrap-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-cluster-authentication-info-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-legacy-token-tracking-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/start-kube-aggregator-informers ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-registration-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-status-available-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-wait-for-first-sync ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/kube-apiserver-autoregistration ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]autoregister-completion ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapi-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-openapiv3-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]poststarthook/apiservice-discovery-controller ok
Aug 13 20:01:56 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:56 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:56 crc kubenswrapper[4183]: I0813 20:01:56.188201 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.615874 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.616124 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.616274 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:01:57 crc kubenswrapper[4183]: I0813 20:01:57.705528 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.104674 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:01:58 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:01:58 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.104897 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:01:58 crc kubenswrapper[4183]: I0813 20:01:58.249211 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podStartSLOduration=81.249140383 podStartE2EDuration="1m21.249140383s" podCreationTimestamp="2025-08-13 20:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:01:58.246314082 +0000 UTC m=+1084.938978760" watchObservedRunningTime="2025-08-13 20:01:58.249140383 +0000 UTC m=+1084.941805101"
Aug 13 20:01:59 crc kubenswrapper[4183]: I0813 20:01:59.540096 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:01:59 crc kubenswrapper[4183]: I0813 20:01:59.540175 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.577590 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.729112 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:00 crc kubenswrapper[4183]: I0813 20:02:00.729322 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:01 crc kubenswrapper[4183]: I0813 20:02:01.333608 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:01 crc kubenswrapper[4183]: I0813 20:02:01.334488 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281117 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:02:03 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:02:03 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281331 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.281457 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:03 crc kubenswrapper[4183]: I0813 20:02:03.477433 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:04 crc kubenswrapper[4183]: I0813 20:02:04.871283 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:02:04 crc kubenswrapper[4183]: I0813 20:02:04.871391 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:02:05 crc kubenswrapper[4183]: I0813 20:02:05.052147 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:05 crc kubenswrapper[4183]: I0813 20:02:05.052528 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:07 crc kubenswrapper[4183]: I0813 20:02:07.615652 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Aug 13 20:02:07 crc kubenswrapper[4183]: I0813 20:02:07.617086 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Aug 13 20:02:09 crc kubenswrapper[4183]: I0813 20:02:09.539284 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:02:09 crc kubenswrapper[4183]: I0813 20:02:09.539527 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:02:10 crc kubenswrapper[4183]: I0813 20:02:10.729873 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:10 crc kubenswrapper[4183]: I0813 20:02:10.729972 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:13 crc kubenswrapper[4183]: I0813 20:02:13.884598 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log"
Aug 13 20:02:13 crc kubenswrapper[4183]: I0813 20:02:13.891375 4183 generic.go:334] "Generic (PLEG): container finished" podID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" containerID="a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143" exitCode=137
Aug 13 20:02:14 crc kubenswrapper[4183]: I0813 20:02:14.871947 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:02:14 crc kubenswrapper[4183]: I0813 20:02:14.872055 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044158 4183 patch_prober.go:28] interesting pod/apiserver-69c565c9b6-vbdpd container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]log ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]etcd ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [-]etcd-readiness failed: reason withheld
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]informer-sync ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartUserInformer ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Aug 13 20:02:15 crc kubenswrapper[4183]: [+]shutdown ok
Aug 13 20:02:15 crc kubenswrapper[4183]: readyz check failed
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044241 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.044717 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.053155 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.053264 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.105045 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908592 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908860 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2" exitCode=1
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.908964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"}
Aug 13 20:02:15 crc kubenswrapper[4183]: I0813 20:02:15.910700 4183
scope.go:117] "RemoveContainer" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2" Aug 13 20:02:17 crc kubenswrapper[4183]: I0813 20:02:17.616356 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:02:17 crc kubenswrapper[4183]: I0813 20:02:17.616544 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:02:19 crc kubenswrapper[4183]: I0813 20:02:19.539668 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:19 crc kubenswrapper[4183]: I0813 20:02:19.540042 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:20 crc kubenswrapper[4183]: I0813 20:02:20.730015 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:20 crc kubenswrapper[4183]: I0813 20:02:20.730523 4183 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.122979 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123459 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" containerID="cri-o://7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123664 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123708 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123747 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.123873 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" gracePeriod=15 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127333 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127486 4183 topology_manager.go:215] "Topology Admit Handler" podUID="48128e8d38b5cbcd2691da698bd9cac3" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127694 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127710 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127721 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="setup" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127729 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="setup" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127742 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127750 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" 
containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127763 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127770 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127864 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127876 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127900 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127912 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127925 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127932 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127943 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127952 
4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127962 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127970 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127979 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.127987 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.127996 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128003 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128154 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128165 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128178 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" 
containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128187 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128197 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" containerName="pruner" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128208 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128220 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128228 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128235 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128246 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-insecure-readyz" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.128466 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128480 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: 
E0813 20:02:21.128492 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128500 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128688 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.128704 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c1db1508241fbac1bedf9130341ffe" containerName="kube-apiserver-check-endpoints" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.133575 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.133659 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bf055e84f32193b9c1c21b0c34a61f01" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.134289 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158390 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158498 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.158611 4183 topology_manager.go:215] "Topology Admit Handler" podUID="92b2a8634cfe8a21cffcc98cc8c87160" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159084 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159105 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159116 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159124 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159135 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="wait-for-host-port" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159142 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="wait-for-host-port" Aug 13 20:02:21 crc kubenswrapper[4183]: E0813 20:02:21.159158 4183 
cpu_manager.go:396] "RemoveStaleState: removing container" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159170 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159295 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159313 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.159323 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160382 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler" containerID="cri-o://51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160501 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-recovery-controller" containerID="cri-o://7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.160637 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="631cdb37fbb54e809ecc5e719aebd371" containerName="kube-scheduler-cert-syncer" 
containerID="cri-o://e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" gracePeriod=30 Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304205 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304341 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304373 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304395 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304438 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304469 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304508 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304547 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304579 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.304617 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406097 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406182 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406209 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406246 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406269 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406296 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406324 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406356 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406348 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406385 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.406426 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407344 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407523 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407600 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407640 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407669 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407700 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407732 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.407761 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.976484 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:21 crc kubenswrapper[4183]: I0813 20:02:21.979513 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" exitCode=2 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.004564 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.007470 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011128 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" exitCode=0 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011262 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" exitCode=0 Aug 13 20:02:22 crc kubenswrapper[4183]: I0813 20:02:22.011369 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" exitCode=2 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.023749 4183 generic.go:334] "Generic (PLEG): container finished" podID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" 
containerID="7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763" exitCode=0 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.023924 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerDied","Data":"7be671fc50422e885dbb1fec6a6c30037cba5481e39185347522a94f177d9763"} Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.029132 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.031121 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e" exitCode=0 Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.036474 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.039619 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:23 crc kubenswrapper[4183]: I0813 20:02:23.040716 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.050510 4183 generic.go:334] "Generic (PLEG): container finished" podID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerID="c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.050563 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerDied","Data":"c790588ca0e77460d01591ce4be738641e9b039fdf1cb3c3fdd37a9243b55f83"} Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.058308 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.064708 4183 generic.go:334] "Generic (PLEG): container finished" podID="631cdb37fbb54e809ecc5e719aebd371" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" exitCode=0 Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.503045 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.507980 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.871920 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:24 crc kubenswrapper[4183]: I0813 20:02:24.872057 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:25 crc kubenswrapper[4183]: I0813 20:02:25.054230 4183 patch_prober.go:28] interesting pod/route-controller-manager-846977c6bc-7gjhh container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:25 crc kubenswrapper[4183]: I0813 20:02:25.054353 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:27 crc kubenswrapper[4183]: I0813 20:02:27.616315 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body= Aug 13 20:02:27 crc kubenswrapper[4183]: I0813 20:02:27.616946 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" Aug 13 20:02:29 crc kubenswrapper[4183]: I0813 20:02:29.539666 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:29 crc kubenswrapper[4183]: I0813 20:02:29.539760 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:30 crc kubenswrapper[4183]: I0813 20:02:30.729509 4183 patch_prober.go:28] interesting pod/controller-manager-78589965b8-vmcwt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout" start-of-body= Aug 13 20:02:30 crc kubenswrapper[4183]: I0813 20:02:30.730239 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: i/o timeout" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.144042 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.429055 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.431535 4183 generic.go:334] "Generic (PLEG): container finished" podID="53c1db1508241fbac1bedf9130341ffe" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" exitCode=0 Aug 13 20:02:31 crc kubenswrapper[4183]: E0813 20:02:31.866061 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events\": dial tcp 192.168.130.11:6443: connect: connection refused" 
event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.891324 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.894905 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.897310 4183 
status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.898116 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.898976 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.902627 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.912313 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection 
refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.919507 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.923328 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.925575 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.927066 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.937900 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.939973 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.942267 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.945280 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.949082 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.953861 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.954953 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.956319 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.959661 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.960501 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.962225 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.963159 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.964075 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.967216 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.969357 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.974407 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.976307 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.978201 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.979062 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.981029 4183 status_manager.go:853] "Failed to get 
status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.983325 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.984602 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.985322 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.986095 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.986957 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:31 crc kubenswrapper[4183]: I0813 20:02:31.988177 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.271926 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.272938 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.274592 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.275658 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.276688 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:32 crc kubenswrapper[4183]: I0813 20:02:32.276739 4183 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.277635 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.480426 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms" Aug 13 20:02:32 crc kubenswrapper[4183]: E0813 20:02:32.886290 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms" Aug 13 20:02:33 crc kubenswrapper[4183]: E0813 20:02:33.131135 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 
openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474262 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474347 4183 generic.go:334] "Generic (PLEG): container finished" podID="79050916-d488-4806-b556-1b0078b31e53" containerID="f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721" exitCode=1 Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.474548 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerDied","Data":"f3271fa1efff9a0885965f0ea8ca31234ba9caefd85007392c549bd273427721"} Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.476760 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.478490 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.479453 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.480291 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.481111 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.483928 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.485227 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.485599 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.486055 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.487543 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488271 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" 
containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" exitCode=255 Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488325 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a"} Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.488552 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.491152 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.491753 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.492511 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.493378 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.630867 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.635107 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.637395 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.640083 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.640704 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.642214 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.643737 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.644623 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.645209 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.649266 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.649862 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.650680 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.651510 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.653001 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.654423 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.656190 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.658048 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.659026 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.659894 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.660903 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.661440 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.663152 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc 
kubenswrapper[4183]: I0813 20:02:33.665048 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.665610 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.666446 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.667012 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.667883 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.669062 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.669996 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.670695 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.672064 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.673439 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.675534 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: E0813 20:02:33.693056 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.776134 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.779418 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.780020 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.780612 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.781261 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.782027 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.782951 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.784489 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.785098 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.785578 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.786645 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.787280 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.787737 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.788288 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.788949 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.789443 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.790632 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.814858 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817226 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817278 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817336 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817359 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817431 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hdnq\" (UniqueName: \"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817456 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817484 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817511 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") pod \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\" (UID: \"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.817533 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") pod \"00d32440-4cce-4609-96f3-51ac94480aab\" (UID: \"00d32440-4cce-4609-96f3-51ac94480aab\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.823308 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.824086 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.824283 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca" (OuterVolumeSpecName: "client-ca") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.829595 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.831321 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.831916 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.832096 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config" (OuterVolumeSpecName: "config") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.839907 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.842529 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849603 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5" (OuterVolumeSpecName: "kube-api-access-hqzj5") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "kube-api-access-hqzj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849899 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849943 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-client-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.849964 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.850010 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.853018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.855311 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config" (OuterVolumeSpecName: "config") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.857175 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00d32440-4cce-4609-96f3-51ac94480aab" (UID: "00d32440-4cce-4609-96f3-51ac94480aab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.857435 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq" (OuterVolumeSpecName: "kube-api-access-5hdnq") pod "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" (UID: "ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d"). InnerVolumeSpecName "kube-api-access-5hdnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.854495 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.858698 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.859277 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.860308 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.861870 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.867239 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.869475 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" 
Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.877766 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.878742 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.880544 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.881319 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.952876 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: 
\"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.952928 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953030 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953060 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") pod \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\" (UID: \"2ad657a4-8b02-4373-8d0d-b0e25345dc90\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953117 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953202 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") pod \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\" (UID: \"b57cce81-8ea0-4c4d-aae1-ee024d201c15\") " Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953432 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5hdnq\" (UniqueName: 
\"kubernetes.io/projected/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-kube-api-access-5hdnq\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953448 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d32440-4cce-4609-96f3-51ac94480aab-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953461 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hqzj5\" (UniqueName: \"kubernetes.io/projected/00d32440-4cce-4609-96f3-51ac94480aab-kube-api-access-hqzj5\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953475 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d32440-4cce-4609-96f3-51ac94480aab-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953486 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953865 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.953916 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock" (OuterVolumeSpecName: "var-lock") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.954018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock" (OuterVolumeSpecName: "var-lock") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.954018 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.962464 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b57cce81-8ea0-4c4d-aae1-ee024d201c15" (UID: "b57cce81-8ea0-4c4d-aae1-ee024d201c15"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:33 crc kubenswrapper[4183]: I0813 20:02:33.965156 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ad657a4-8b02-4373-8d0d-b0e25345dc90" (UID: "2ad657a4-8b02-4373-8d0d-b0e25345dc90"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054315 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054364 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054379 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054393 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad657a4-8b02-4373-8d0d-b0e25345dc90-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054406 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.054418 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b57cce81-8ea0-4c4d-aae1-ee024d201c15-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.496521 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ad657a4-8b02-4373-8d0d-b0e25345dc90","Type":"ContainerDied","Data":"9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 
20:02:34.496557 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.496587 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b70547ed21fdd52e8499a4a8257b914c8e7ffca7487e1b746ab6e52f3ad42e8" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.497818 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.498384 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.499163 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.500878 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.501436 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.502971 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504043 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504689 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" event={"ID":"00d32440-4cce-4609-96f3-51ac94480aab","Type":"ContainerDied","Data":"97945bb2ed21e57bfdbc9492cf4d12c73fca9904379ba3b00d1adaaec35574f9"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.504911 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.510569 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.511628 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.512494 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.513769 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515004 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515683 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515687 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"b57cce81-8ea0-4c4d-aae1-ee024d201c15","Type":"ContainerDied","Data":"639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.515875 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639e0e9093fe7c92ed967648091e3738a0b9f70f4bdb231708a7ad902081cdab" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.517041 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.518184 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.519256 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.520329 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.521510 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.522740 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.522921 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.523083 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" event={"ID":"ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d","Type":"ContainerDied","Data":"7b8bdc9f188dc335dab87669dac72f597c63109a9725099d338fac6691b46d6e"} Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.523679 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.524237 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.525267 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.533218 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.535188 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.537986 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.538638 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.539522 4183 
status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.540650 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.541552 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.542377 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.543332 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.546395 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.547282 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.548264 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.549312 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.550070 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.550576 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.551271 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.553470 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.554170 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.555246 4183 status_manager.go:853]
"Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.556157 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.556904 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.557767 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.564338 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.567869 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.568709 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.569440 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.570700 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.571439 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.572174 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.573967 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.576134 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.577151 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443:
connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.577686 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.578274 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.578869 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.579466 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.580407 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.583300 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.584394 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.585512 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.587040 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.587641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.588412 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.871918 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:02:34 crc kubenswrapper[4183]: I0813 20:02:34.872067 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.956115 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.956951 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp
192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.957575 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.958710 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.959960 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:34 crc kubenswrapper[4183]: E0813 20:02:34.960004 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.217194 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.218923 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.219565 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.221954 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.223049 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.224121 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.224713 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.225338 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.226106 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.227234 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.228098 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.229299 4183 status_manager.go:853]
"Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.230995 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.231916 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.232540 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: I0813 20:02:35.233328 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:35 crc kubenswrapper[4183]: E0813 20:02:35.295244 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s"
Aug 13 20:02:38 crc kubenswrapper[4183]: E0813 20:02:38.497532 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.539274 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.539381 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.971048 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.976426 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.980409 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.983726 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.986091 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.993431 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.996708 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.996719 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log"
Aug 13 20:02:39 crc kubenswrapper[4183]: I0813 20:02:39.999005 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.005357 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.009100 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.009423 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.012959 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.013871 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.014300 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.015256 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.016421 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.017243 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.017766 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.020040 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.021231 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.023635 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.023942 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp
192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.024124 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.025519 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.029754 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.031242 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.032249 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.033030 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034299 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034354 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.034382 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.035124 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.036126 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.036459 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.037488 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.038454 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.039382 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.040466 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.041496 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.042642 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.043611 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.044625 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 
20:02:40.045613 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.047488 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.049417 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.050515 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.051643 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 
20:02:40.053272 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.057935 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.061766 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.062904 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.063535 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.064534 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.066270 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.067941 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.068702 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.070618 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" 
pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.071518 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.073352 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.075716 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.077205 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.079158 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.084023 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.086202 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.088068 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.089629 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090453 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") pod \"631cdb37fbb54e809ecc5e719aebd371\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090596 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") pod \"631cdb37fbb54e809ecc5e719aebd371\" (UID: \"631cdb37fbb54e809ecc5e719aebd371\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090899 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "631cdb37fbb54e809ecc5e719aebd371" (UID: "631cdb37fbb54e809ecc5e719aebd371"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.090942 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "631cdb37fbb54e809ecc5e719aebd371" (UID: "631cdb37fbb54e809ecc5e719aebd371"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.092038 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093259 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093311 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/631cdb37fbb54e809ecc5e719aebd371-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.093608 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.193911 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.193988 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194026 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194058 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194094 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194121 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") pod \"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194161 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc 
kubenswrapper[4183]: I0813 20:02:40.194187 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194206 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194228 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194249 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194277 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194297 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") pod 
\"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194324 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194346 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194382 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194409 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194436 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194711 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194747 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194821 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194884 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194926 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194946 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") pod \"53c1db1508241fbac1bedf9130341ffe\" (UID: \"53c1db1508241fbac1bedf9130341ffe\") " Aug 13 20:02:40 crc 
kubenswrapper[4183]: I0813 20:02:40.194967 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.194991 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195019 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195045 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qj9\" (UniqueName: \"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") pod \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\" (UID: \"23eb88d6-6aea-4542-a2b9-8f3fd106b4ab\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195075 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195096 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") pod \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\" (UID: \"b233d916-bfe3-4ae5-ae39-6b574d1aa05e\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195119 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") pod \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\" (UID: \"42b6a393-6194-4620-bf8f-7e4b6cbe5679\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195148 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") pod \"79050916-d488-4806-b556-1b0078b31e53\" (UID: \"79050916-d488-4806-b556-1b0078b31e53\") " Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195296 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock" (OuterVolumeSpecName: "var-lock") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195599 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.195961 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config" (OuterVolumeSpecName: "config") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.196677 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.196746 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197177 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197289 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit" (OuterVolumeSpecName: "audit") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.197696 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.198116 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.198903 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199238 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199301 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "53c1db1508241fbac1bedf9130341ffe" (UID: "53c1db1508241fbac1bedf9130341ffe"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199638 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.199721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca" (OuterVolumeSpecName: "service-ca") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.200026 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.202489 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.204030 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config" (OuterVolumeSpecName: "console-config") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.208569 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9" (OuterVolumeSpecName: "kube-api-access-r8qj9") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "kube-api-access-r8qj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.218292 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.220721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.221921 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.227524 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.227679 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.228713 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss" (OuterVolumeSpecName: "kube-api-access-4f9ss") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "kube-api-access-4f9ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.229019 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.229133 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" (UID: "23eb88d6-6aea-4542-a2b9-8f3fd106b4ab"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.231737 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "79050916-d488-4806-b556-1b0078b31e53" (UID: "79050916-d488-4806-b556-1b0078b31e53"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.236227 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.237452 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "registry-storage") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". PluginName "kubernetes.io/csi", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.238634 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.239584 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "42b6a393-6194-4620-bf8f-7e4b6cbe5679" (UID: "42b6a393-6194-4620-bf8f-7e4b6cbe5679"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.241981 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh" (OuterVolumeSpecName: "kube-api-access-lz9qh") pod "b233d916-bfe3-4ae5-ae39-6b574d1aa05e" (UID: "b233d916-bfe3-4ae5-ae39-6b574d1aa05e"). InnerVolumeSpecName "kube-api-access-lz9qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297045 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-client\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297115 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297139 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297153 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-kubelet-dir\") on node \"crc\" 
DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297170 4183 reconciler_common.go:300] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297185 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297199 4183 reconciler_common.go:300] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-image-import-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297212 4183 reconciler_common.go:300] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297227 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297240 4183 reconciler_common.go:300] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-encryption-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297254 4183 reconciler_common.go:300] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297271 4183 reconciler_common.go:300] "Volume detached for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297288 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4f9ss\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-kube-api-access-4f9ss\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297304 4183 reconciler_common.go:300] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297318 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/53c1db1508241fbac1bedf9130341ffe-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297333 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-trusted-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297347 4183 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-tls\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297364 4183 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42b6a393-6194-4620-bf8f-7e4b6cbe5679-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297398 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r8qj9\" (UniqueName: 
\"kubernetes.io/projected/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-kube-api-access-r8qj9\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297413 4183 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42b6a393-6194-4620-bf8f-7e4b6cbe5679-bound-sa-token\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297429 4183 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42b6a393-6194-4620-bf8f-7e4b6cbe5679-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297444 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297458 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/79050916-d488-4806-b556-1b0078b31e53-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297472 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-audit-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297485 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297501 4183 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42b6a393-6194-4620-bf8f-7e4b6cbe5679-registry-certificates\") on node \"crc\" DevicePath 
\"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297515 4183 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297529 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79050916-d488-4806-b556-1b0078b31e53-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297542 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297559 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lz9qh\" (UniqueName: \"kubernetes.io/projected/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-kube-api-access-lz9qh\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.297573 4183 reconciler_common.go:300] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b233d916-bfe3-4ae5-ae39-6b574d1aa05e-console-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588367 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84fccc7b6-mkncc_b233d916-bfe3-4ae5-ae39-6b574d1aa05e/console/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588554 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84fccc7b6-mkncc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.588685 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84fccc7b6-mkncc" event={"ID":"b233d916-bfe3-4ae5-ae39-6b574d1aa05e","Type":"ContainerDied","Data":"e6ed8c1e93f8bc476d05eff439933a75e91865b1b913300d2de272ffc970fd9f"} Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.591107 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.592722 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.593893 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.596348 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.598081 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.598716 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_631cdb37fbb54e809ecc5e719aebd371/kube-scheduler-cert-syncer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.599917 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.602294 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.604509 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.605512 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.608720 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.613287 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.614356 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.615596 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.616744 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.617542 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.618533 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-check-endpoints/5.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.624663 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_53c1db1508241fbac1bedf9130341ffe/kube-apiserver-cert-syncer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.626103 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.628269 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.629763 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.630956 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.632720 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.633709 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.634588 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.643673 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.644669 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.647267 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.649110 4183 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.650116 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.650878 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.652045 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" event={"ID":"42b6a393-6194-4620-bf8f-7e4b6cbe5679","Type":"ContainerDied","Data":"958ba1ee8e9afa1cbcf49a3010aa63c2343b2e7ad70d6958e858075ed46bd0f4"} Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.655957 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_79050916-d488-4806-b556-1b0078b31e53/installer/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656491 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656635 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"79050916-d488-4806-b556-1b0078b31e53","Type":"ContainerDied","Data":"c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc"} Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.656685 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d98545d20b61052f0164d192095269601cf3a013453289a4380b9d437de8fc" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.658394 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.661451 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.662678 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-mtx25_23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/openshift-apiserver/0.log" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.662727 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.663485 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.664381 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.665472 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.666156 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.667619 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.677670 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.679546 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.681101 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.683452 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.684923 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.686295 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.687519 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.688643 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.690579 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.692375 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.695015 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.710178 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.715430 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.717448 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.720003 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.721741 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.722877 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.723600 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" 
pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.724325 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.725055 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.725735 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.728397 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc 
kubenswrapper[4183]: I0813 20:02:40.731248 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.738267 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.740283 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.742713 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.743524 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.747326 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.748566 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.749716 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.754477 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.755827 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" 
pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.756452 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.757134 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.757716 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.758331 4183 status_manager.go:853] "Failed to get status for pod" podUID="631cdb37fbb54e809ecc5e719aebd371" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.759046 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.759607 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.760155 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.760650 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.761316 4183 status_manager.go:853] "Failed to get status for pod" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" pod="openshift-apiserver/apiserver-67cbf64bc9-mtx25" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-mtx25\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.761945 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.762517 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.763554 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.764555 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.765964 4183 status_manager.go:853] "Failed to get status for pod" podUID="53c1db1508241fbac1bedf9130341ffe" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.767552 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:40 crc kubenswrapper[4183]: I0813 20:02:40.770117 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.220590 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eb88d6-6aea-4542-a2b9-8f3fd106b4ab" path="/var/lib/kubelet/pods/23eb88d6-6aea-4542-a2b9-8f3fd106b4ab/volumes" Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.223978 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c1db1508241fbac1bedf9130341ffe" path="/var/lib/kubelet/pods/53c1db1508241fbac1bedf9130341ffe/volumes" Aug 13 20:02:41 crc kubenswrapper[4183]: I0813 20:02:41.228241 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631cdb37fbb54e809ecc5e719aebd371" path="/var/lib/kubelet/pods/631cdb37fbb54e809ecc5e719aebd371/volumes" Aug 13 20:02:42 crc kubenswrapper[4183]: I0813 20:02:42.615716 4183 patch_prober.go:28] interesting pod/image-registry-7cbd5666ff-bbfrf container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Aug 13 20:02:42 crc kubenswrapper[4183]: I0813 20:02:42.615907 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Aug 13 20:02:43 crc kubenswrapper[4183]: E0813 20:02:43.133995 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:44 crc kubenswrapper[4183]: I0813 20:02:44.871378 4183 
patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:44 crc kubenswrapper[4183]: I0813 20:02:44.872024 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:44 crc kubenswrapper[4183]: E0813 20:02:44.899307 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.134320 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.136079 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.137078 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.138687 4183 kubelet_node_status.go:594] "Error updating node 
status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.140025 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: E0813 20:02:45.140097 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.213624 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.215267 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.218619 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 
20:02:45.221977 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.222751 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.223611 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.224466 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.225551 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.226547 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.227405 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.229145 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.229898 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.230641 4183 
status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.231662 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.232379 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.233232 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:45 crc kubenswrapper[4183]: I0813 20:02:45.234537 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.208317 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.210866 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.211828 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.212948 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.213960 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.214838 4183 
status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.216124 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.217011 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.218117 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.219027 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.220223 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.221319 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.222379 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.223687 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.225764 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.226823 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.227763 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.228582 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.229549 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.229580 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:02:46 crc kubenswrapper[4183]: E0813 20:02:46.230413 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:46 crc kubenswrapper[4183]: I0813 20:02:46.231018 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.208426 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.212466 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.213743 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.215143 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.216187 
4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.216927 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.218184 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.219320 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.220300 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.221351 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.223186 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.223737 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.224717 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.227581 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.228651 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.229338 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.229363 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.230133 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: E0813 20:02:49.230266 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.230940 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.231155 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.232316 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.539512 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:49 crc kubenswrapper[4183]: I0813 20:02:49.539728 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:02:51 crc kubenswrapper[4183]: E0813 20:02:51.901264 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:53 crc 
kubenswrapper[4183]: E0813 20:02:53.137504 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707203 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707366 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707420 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707468 4183 
kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707503 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Pending" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.707532 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.872090 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:02:54 crc kubenswrapper[4183]: I0813 20:02:54.872231 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.219044 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.220296 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc 
kubenswrapper[4183]: I0813 20:02:55.222133 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.223240 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.224009 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.224820 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.226944 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.228494 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.230011 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.231203 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.231769 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.232434 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.233162 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.234290 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.239215 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.240931 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.242716 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.244399 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: I0813 20:02:55.245681 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.336066 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.337683 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 
20:02:55.340507 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.341480 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.342210 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:02:55 crc kubenswrapper[4183]: E0813 20:02:55.342229 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:02:58 crc kubenswrapper[4183]: E0813 20:02:58.904133 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:02:59 crc kubenswrapper[4183]: I0813 20:02:59.541340 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:02:59 crc kubenswrapper[4183]: I0813 20:02:59.541485 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 
10.217.0.57:8443: connect: connection refused" Aug 13 20:03:03 crc kubenswrapper[4183]: E0813 20:03:03.139563 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:04 crc kubenswrapper[4183]: I0813 20:03:04.871666 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:04 crc kubenswrapper[4183]: I0813 20:03:04.871934 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.210563 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.211517 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.212300 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.213267 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.214501 4183 
status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.215662 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.217155 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.218226 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.219282 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.220280 4183 
status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.221003 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.221764 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.222425 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.223649 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.224408 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.225165 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.226077 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.226826 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: I0813 20:03:05.227494 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" 
pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.444295 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.445355 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.446196 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.447314 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.448427 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:05 crc kubenswrapper[4183]: E0813 20:03:05.448472 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Aug 13 20:03:05 crc 
kubenswrapper[4183]: E0813 20:03:05.908710 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:09 crc kubenswrapper[4183]: I0813 20:03:09.540596 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:09 crc kubenswrapper[4183]: I0813 20:03:09.540878 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.947144 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.948536 4183 generic.go:334] "Generic (PLEG): container finished" podID="51a02bbf-2d40-4f84-868a-d399ea18a846" containerID="91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f" exitCode=1 Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.948600 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerDied","Data":"91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f"} Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.950159 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.950921 4183 scope.go:117] "RemoveContainer" containerID="91607aba35220cb93c0858cc3bcb38626d5aa71ea1bc663b3f532829d3c8174f" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.951127 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.952515 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.953682 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.954986 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.956447 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.957937 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.959092 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.961099 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.962411 4183 
status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.962999 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.963527 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.964159 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.965230 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.966427 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.967529 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.970578 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.971704 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.972739 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:10 crc kubenswrapper[4183]: I0813 20:03:10.973474 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:12 crc kubenswrapper[4183]: E0813 20:03:12.913055 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:13 crc kubenswrapper[4183]: E0813 20:03:13.142309 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:14 crc kubenswrapper[4183]: I0813 20:03:14.873139 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:14 crc kubenswrapper[4183]: I0813 20:03:14.873303 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.214539 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.215659 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.217023 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.218560 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.219446 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.220423 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.221418 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:15 crc kubenswrapper[4183]: 
I0813 20:03:15.222705 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.223572 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.224623 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.225457 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.226282 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.227309 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.227988 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.228621 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.230261 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.235597 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.236756 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.238064 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: I0813 20:03:15.239153 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.649213 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.650252 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.651715 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.652691 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.653510 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:15 crc kubenswrapper[4183]: E0813 20:03:15.653526 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:19 crc kubenswrapper[4183]: I0813 20:03:19.540153 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:03:19 crc kubenswrapper[4183]: I0813 20:03:19.540272 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:03:19 crc kubenswrapper[4183]: E0813 20:03:19.915210 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:22 crc kubenswrapper[4183]: E0813 20:03:22.278613 4183 desired_state_of_world_populator.go:320] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" volumeName="registry-storage"
Aug 13 20:03:23 crc kubenswrapper[4183]: E0813 20:03:23.144835 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.609959 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107"
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.610356 4183 kuberuntime_manager.go:1262] container &Container{Name:console,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae,Command:[/opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-config.yaml --service-ca-file=/var/service-ca/service-ca.crt --v=2],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:console-serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-oauth-config,ReadOnly:true,MountPath:/var/oauth-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-config,ReadOnly:true,MountPath:/var/console-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:service-ca,ReadOnly:true,MountPath:/var/service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:oauth-serving-cert,ReadOnly:true,MountPath:/var/oauth-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2nz92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000590000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod console-644bb77b49-5x5xk_openshift-console(9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1): CreateContainerError: context deadline exceeded
Aug 13 20:03:24 crc kubenswrapper[4183]: E0813 20:03:24.610451 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Aug 13 20:03:24 crc kubenswrapper[4183]: I0813 20:03:24.872084 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:24 crc kubenswrapper[4183]: I0813 20:03:24.872210 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.047833 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.049084 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.050205 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.051015 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.051935 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.052827 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.053835 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.054432 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.055227 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.055950 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.056836 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.057551 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.058188 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.058752 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.059343 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.059963 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.060567 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.061288 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.061997 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.062546 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.063426 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.212956 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.214088 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.215231 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.216167 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.217076 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.218506 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.219432 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.220191 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.221977 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.226475 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.227704 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.229071 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.229894 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.230754 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.231917 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.232972 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.233637 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.234455 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.235441 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.236316 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: I0813 20:03:25.237150 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422534 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61"
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422867 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_openshift-kube-scheduler-operator(71af81a9-7d43-49b2-9287-c375900aa905): CreateContainerError: context deadline exceeded
Aug 13 20:03:25 crc kubenswrapper[4183]: E0813 20:03:25.422934 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.008298 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.009152 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.009639 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010249 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010877 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.010914 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.052150 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e"
Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.053483 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:26 crc kubenswrapper[4183]:
I0813 20:03:26.055448 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.056550 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.057467 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.058261 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.059259 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.060223 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.061058 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.061933 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.062691 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.063579 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.064438 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.065181 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.065991 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.066908 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.067756 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.068570 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.069641 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.071225 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.072344 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.073650 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: I0813 20:03:26.074939 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:26 crc kubenswrapper[4183]: E0813 20:03:26.917366 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.231826 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.232062 4183 kuberuntime_manager.go:1262] container 
&Container{Name:cluster-image-registry-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,Command:[],Args:[--files=/var/run/configmaps/trusted-ca/tls-ca-bundle.pem --files=/etc/secrets/tls.crt --files=/etc/secrets/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cluster-image-registry-operator,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8,ValueFrom:nil,},EnvVar{Name:IMAGE_PRUNER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:AZURE_ENVIRONMENT_FILEPATH,Value:/tmp/azurestackcloud.json,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:trusted-ca,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:image-registry-operator-tls,ReadOnly:false,MountPath:/etc/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9x6dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000290000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-image-registry-operator-7769bd8d7d-q5cvv_openshift-image-registry(b54e8941-2fc4-432a-9e51-39684df9089e): CreateContainerError: context deadline exceeded Aug 13 20:03:27 crc kubenswrapper[4183]: E0813 20:03:27.232162 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-image-registry-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.067346 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" Aug 13 
20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.067614 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.068524 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.069916 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.070591 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.071345 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.072227 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.073426 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.074561 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.075600 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.076508 4183 status_manager.go:853] "Failed to get 
status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.077389 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.078278 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.078943 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.079522 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.080234 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.080923 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.081510 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.082587 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.085724 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.088098 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.089261 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:28 crc kubenswrapper[4183]: I0813 20:03:28.089892 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:29 crc kubenswrapper[4183]: I0813 20:03:29.540064 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:29 crc kubenswrapper[4183]: I0813 20:03:29.540268 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.361546 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58" Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.362141 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-apiserver-check-endpoints,Image:quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69,Command:[cluster-kube-apiserver-operator check-endpoints],Args:[--listen 0.0.0.0:17698 --namespace $(POD_NAMESPACE) --v 2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:check-endpoints,HostPort:0,ContainerPort:17698,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j2kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5): CreateContainerError: context deadline exceeded Aug 13 20:03:31 crc kubenswrapper[4183]: E0813 20:03:31.362199 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.110567 4183 scope.go:117] "RemoveContainer" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.112827 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.114285 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.115013 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.115521 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.116366 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.117287 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.118542 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.119645 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.120606 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.121994 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.123110 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.125717 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.126669 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.127456 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.128200 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.128897 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.131474 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.132164 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.132706 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.134032 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.134677 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.135378 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.136175 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.804096 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.804196 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78c28c3dccb095318f195e1d81c6ec26e3a25cfb361d9aa9942e4d8a6f9923b"} err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Aug 13 20:03:32 crc kubenswrapper[4183]: I0813 20:03:32.804225 4183 scope.go:117] "RemoveContainer" containerID="c206967f2892cfc5d9ca27cc94cd1d42b6561839a6724e931bbdea13b6e1cde5" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.955395 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service 
failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.955915 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-controller-manager-operator,Image:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d6201c776053346ebce8f90c34797a7a7c05898008e17f3ba9673f5f14507b0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-6f6cb54958-rbddb_openshift-kube-controller-manager-operator(c1620f19-8aa3-45cf-931b-7ae0e5cd14cf): CreateContainerError: context deadline exceeded Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.956046 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.957927 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Aug 13 20:03:32 
crc kubenswrapper[4183]: E0813 20:03:32.958531 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) --authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-77658b5b66-dq5sc_openshift-config-operator(530553aa-0a1d-423e-8a22-f5eb4bdbb883): CreateContainerError: context deadline exceeded Aug 13 20:03:32 crc kubenswrapper[4183]: E0813 20:03:32.958662 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.139999 4183 scope.go:117] "RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.143820 4183 scope.go:117] "RemoveContainer" containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" Aug 13 20:03:33 crc kubenswrapper[4183]: E0813 20:03:33.146579 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": 
dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.146712 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.148155 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.152577 4183 
status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.154245 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.156673 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.160183 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.163263 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 
192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.164587 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.165673 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.166966 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.167635 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.170179 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.171476 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.179570 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.180585 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.181576 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.182543 4183 
status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.184442 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.185063 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.185589 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.186180 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.186691 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.187497 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.188824 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.192558 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.193641 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" 
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.195080 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.195730 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.197338 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.198623 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 
20:03:33.200950 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.201666 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.202457 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.204072 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.205686 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": 
dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.207140 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.208048 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.209113 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.209910 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.210405 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.211084 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.211709 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.212357 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.213086 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.213621 
4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: I0813 20:03:33.214235 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:33 crc kubenswrapper[4183]: E0813 20:03:33.919739 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:34 crc kubenswrapper[4183]: I0813 20:03:34.872349 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:34 crc kubenswrapper[4183]: I0813 20:03:34.872962 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.211419 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.212210 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.213376 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.214993 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.216000 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.217673 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.220219 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.223477 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.224896 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.226685 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection 
refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.234192 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.235357 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.237326 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.239180 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.240549 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.241331 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.242495 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.243645 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.244446 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.245583 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.247018 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.247945 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.249169 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.665696 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.665987 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: 
I0813 20:03:35.666063 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:35 crc kubenswrapper[4183]: I0813 20:03:35.666083 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.259121 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.260281 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.261425 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.262254 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.263093 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.263115 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node 
status exceeds retry count" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.932530 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.932730 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator(0f394926-bdb9-425c-b36e-264d7fd34550): CreateContainerError: context deadline exceeded Aug 13 20:03:36 crc kubenswrapper[4183]: E0813 20:03:36.933059 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.189487 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.191418 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.192501 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.193612 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.197451 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.199123 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.200252 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.201146 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.201952 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.202673 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.203381 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.204067 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.204738 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.205462 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.206116 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.206760 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.207586 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.213151 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.213950 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.215625 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.216425 4183 
status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.217261 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.218475 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.219215 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:37 crc kubenswrapper[4183]: I0813 20:03:37.221347 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.541123 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.541261 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:03:39 crc kubenswrapper[4183]: I0813 20:03:39.872486 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.238198 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.238937 4183 kuberuntime_manager.go:1262] container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,Command:[/bin/bash -c #!/bin/bash Aug 13 20:03:40 crc kubenswrapper[4183]: set -o allexport Aug 13 20:03:40 crc kubenswrapper[4183]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Aug 13 20:03:40 crc kubenswrapper[4183]: source /etc/kubernetes/apiserver-url.env Aug 13 20:03:40 crc kubenswrapper[4183]: else Aug 13 20:03:40 crc kubenswrapper[4183]: echo "Error: 
/etc/kubernetes/apiserver-url.env is missing" Aug 13 20:03:40 crc kubenswrapper[4183]: exit 1 Aug 13 20:03:40 crc kubenswrapper[4183]: fi Aug 13 20:03:40 crc kubenswrapper[4183]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Aug 13 20:03:40 crc kubenswrapper[4183]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:SDN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ec002699d6fa111b93b08bda974586ae4018f4a52d1cbfd0995e6dc9c732151,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce3a9355a4497b51899867170943d34bbc2d2b7996d9a002c103797bd828d71b,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8c7480dd73,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2
e7ec59799b27a6b414943469d8,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0791454224e2ec76fd43916220bd5ae55bf18f37f0cd571cb05c76e1d791453,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc5f4b6565d37bd875cdb42e95372128231218fb8741f640b09565d9dcea2cb1,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueF
rom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4sfhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-767c585db5-zd56b_openshift-network-operator(cc291782-27d2-4a74-af79-c7dcb31535d2): CreateContainerError: context deadline exceeded Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.239006 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-network-operator/network-operator-767c585db5-zd56b" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" Aug 13 20:03:40 crc kubenswrapper[4183]: E0813 20:03:40.921336 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s" Aug 13 20:03:41 crc 
kubenswrapper[4183]: I0813 20:03:41.220155 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.221970 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.223472 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.224279 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.225119 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.225675 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.226532 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.227446 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.228282 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.229134 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.230321 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.231455 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.232479 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.233494 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.235245 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.236420 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.237317 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.238312 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.239691 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.241177 4183 
status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.242645 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.243418 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.244192 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.244936 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:41 crc kubenswrapper[4183]: I0813 20:03:41.245929 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:43 crc kubenswrapper[4183]: E0813 20:03:43.150624 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431158 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" 
podSandboxID="282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431657 4183 kuberuntime_manager.go:1262] container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d9vhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
service-ca-operator-546b4f8984-pwccz_openshift-service-ca-operator(6d67253e-2acd-4bc1-8185-793587da4f17): CreateContainerError: context deadline exceeded Aug 13 20:03:44 crc kubenswrapper[4183]: E0813 20:03:44.431702 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 20:03:44 crc kubenswrapper[4183]: I0813 20:03:44.872013 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:44 crc kubenswrapper[4183]: I0813 20:03:44.872130 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.212536 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.214267 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.215631 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.217100 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.219131 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.220211 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.221167 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.222126 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.223070 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.223960 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.224621 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.225944 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.226706 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.227767 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.229005 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.230031 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.231325 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.233490 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.234690 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.235763 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.236752 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.237925 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.238925 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.239881 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.245310 4183 scope.go:117] "RemoveContainer" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.245885 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.247417 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.248220 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.249427 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.250417 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:45 crc 
kubenswrapper[4183]: I0813 20:03:45.251017 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.251583 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.252204 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.252927 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.253378 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.254047 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.256380 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.257620 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.258610 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.259600 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.260899 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.261555 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.262370 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.263113 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.263691 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.265312 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.266454 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.267516 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.268704 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:45 crc kubenswrapper[4183]: I0813 20:03:45.269974 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.647315 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.648097 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.648578 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649118 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649679 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:46 crc kubenswrapper[4183]: E0813 20:03:46.649721 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:47 crc kubenswrapper[4183]: E0813 20:03:47.924557 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:49 crc kubenswrapper[4183]: I0813 20:03:49.540076 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:03:49 crc kubenswrapper[4183]: I0813 20:03:49.540191 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.547620 4183 scope.go:117] "RemoveContainer" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.815311 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 20:03:51 crc kubenswrapper[4183]: E0813 20:03:51.818337 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.818414 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} err="failed to get container status \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": rpc error: code = NotFound desc = could not find container \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.818438 4183 scope.go:117] "RemoveContainer" containerID="71a0cdc384f9d93ad108bee372da2b3e7dddb9b98c65c36f3ddbf584a54fd830"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.908296 4183 scope.go:117] "RemoveContainer" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.973248 4183 scope.go:117] "RemoveContainer" containerID="417399fd591cd0cade9e86c96a7f4a9443d365dc57f627f00e02594fd8957bf3"
Aug 13 20:03:51 crc kubenswrapper[4183]: I0813 20:03:51.999520 4183 scope.go:117] "RemoveContainer" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.136716 4183 scope.go:117] "RemoveContainer" containerID="a4a4a30f20f748c27de48f589b297456dbde26c9c06b9c1e843ce69a376e85a9"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.251946 4183 scope.go:117] "RemoveContainer" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.332974 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.334677 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.334969 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"2eb2b200bca0d10cf0fe16fb7c0caf80","Type":"ContainerStarted","Data":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.347377 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.352028 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.352959 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.353908 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.354585 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.355237 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.355963 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.357058 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.359662 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.360466 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.361210 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.362273 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.363085 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.374354 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.377143 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.379331 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.381240 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.382386 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.383532 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.384450 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.385304 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.386031 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.386926 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.388027 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.389206 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.591196 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594312 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594452 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594543 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.594627 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.595310 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.596506 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.597199 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.598261 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599060 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599094 4183 scope.go:117] "RemoveContainer" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.599826 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601014 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.601085 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": container with ID starting with 7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e not found: ID does not exist" containerID="7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601130 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e"} err="failed to get container status \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": rpc error: code = NotFound desc = could not find container \"7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e\": container with ID starting with 7c6f70befd30b1ee91edc5d76f0aec3248196d4a50e678ee75d7659e70773e3e not found: ID does not exist"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.601144 4183 scope.go:117] "RemoveContainer" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.604198 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.605258 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.606558 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.608023 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.609312 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.610283 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.611159 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.611766 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.612431 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.613495 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.615178 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.616312 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.618643 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.625019 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.626334 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.628113 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.631859 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.645070 4183 scope.go:117] "RemoveContainer" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.650987 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.651253 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"}
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.655764 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.657134 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.657711 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.658417 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.659137 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.659921 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.663326 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.667993 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.670400 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.673032 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.675751 4183 status_manager.go:853] "Failed to 
get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.680620 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.689708 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.691103 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.694349 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: 
connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.699256 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.703504 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.705175 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.719389 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.724042 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.730489 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.737357 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.740380 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.746116 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.747167 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.815913 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"a3aeac3b3f0abd9616c32591e8c03ee04ad93d9eaa1a57f5f009d1e5534dc9bf"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.836479 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"4df62f5cb9c66f562c10ea184889e69acedbf4f895667310c68697db48fd553b"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.846168 4183 scope.go:117] "RemoveContainer" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": container with ID starting with 51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52 not found: ID does not exist" containerID="51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847236 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52"} err="failed to get container status \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": rpc error: code = NotFound desc = could not 
find container \"51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52\": container with ID starting with 51acee2d724f92a19086cc99db7e79f254df8a0e9272c1893961ca69a8e49d52 not found: ID does not exist" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847256 4183 scope.go:117] "RemoveContainer" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847353 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-scheduler-cert-syncer_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff'" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" Aug 13 20:03:52 crc kubenswrapper[4183]: E0813 20:03:52.847399 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-scheduler-cert-syncer_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff'" containerID="e9af88a05768146a45a54a60bd296947e7613d71ef7abe92037c55bb516250ff" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.847419 4183 scope.go:117] "RemoveContainer" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.865429 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"bf055e84f32193b9c1c21b0c34a61f01","Type":"ContainerStarted","Data":"da0d5a4673db72bf057aaca9add937d2dd33d15edccefb4817f17da3759c2927"} Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.884076 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.923425 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.924626 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.925393 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.926622 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.930474 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.931532 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.932827 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.933481 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.934358 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.938533 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.939640 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.941010 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.945088 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.946475 4183 
status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.956057 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.956738 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.962403 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.970510 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.972115 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.975619 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.978427 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.996070 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:52 crc kubenswrapper[4183]: I0813 20:03:52.997568 4183 
status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.001222 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.007673 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.153513 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/events/openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{openshift-controller-manager-operator-7978d7d7f6-2nt8z.185b6beb073764b5 openshift-controller-manager-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-7978d7d7f6-2nt8z,UID:0f394926-bdb9-425c-b36e-264d7fd34550,APIVersion:v1,ResourceVersion:23715,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 19:58:56.973497525 +0000 UTC m=+903.666162213,LastTimestamp:2025-08-13 20:01:36.894280615 +0000 UTC m=+1063.586945253,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.161396 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'd1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624'" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.161515 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-crc_openshift-kube-scheduler_631cdb37fbb54e809ecc5e719aebd371_0 in pod sandbox 970bf8339a8e8001b60c124abd60c2b2381265f54d5bcdb460515789626b6ba9 from index: no such id: 'd1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624'" containerID="d1ea920aded19e14b46106b6457550444708a9f585b4113ce718580a8bccc624" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.161545 
4183 scope.go:117] "RemoveContainer" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.161685 4183 scope.go:117] "RemoveContainer" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.165607 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": container with ID starting with d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92 not found: ID does not exist" containerID="d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.165661 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92"} err="failed to get container status \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": rpc error: code = NotFound desc = could not find container \"d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92\": container with ID starting with d36c8760a1c19ca1f28d0007a9f2c243c1acee1eb911d56d81ebee03e6400b92 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.165680 4183 scope.go:117] "RemoveContainer" containerID="42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.166373 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf"} err="failed to get container status \"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": rpc error: code = NotFound desc = could not find container 
\"42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf\": container with ID starting with 42b3bb023d6ce32b2b9f8a3891b335978e376af366afe99f4127448549aeb2bf not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.166417 4183 scope.go:117] "RemoveContainer" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.388109 4183 scope.go:117] "RemoveContainer" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.489002 4183 scope.go:117] "RemoveContainer" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.490441 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": container with ID starting with 138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325 not found: ID does not exist" containerID="138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.490514 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325"} err="failed to get container status \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": rpc error: code = NotFound desc = could not find container \"138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325\": container with ID starting with 138c379560167401375d4cc2fb35126ddae83cb27fc75fc2be9ee900a6605325 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.490537 4183 scope.go:117] "RemoveContainer" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" Aug 13 20:03:53 
crc kubenswrapper[4183]: E0813 20:03:53.492177 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": container with ID starting with 2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2 not found: ID does not exist" containerID="2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.492257 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2"} err="failed to get container status \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": rpc error: code = NotFound desc = could not find container \"2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2\": container with ID starting with 2625ef135e7faed9c6c22a389ba46318826b6fa488e5892ff60564dfbd4b5ec2 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.492291 4183 scope.go:117] "RemoveContainer" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.554953 4183 scope.go:117] "RemoveContainer" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.558249 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": container with ID starting with fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a not found: ID does not exist" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.558305 4183 kuberuntime_gc.go:150] "Failed to remove 
container" err="failed to get container status \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": rpc error: code = NotFound desc = could not find container \"fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a\": container with ID starting with fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a not found: ID does not exist" containerID="fe89df31f5f9e77b8c0a9fdfd0f23f0cd0db17d2be0d39798975bc0835f9701a" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.558335 4183 scope.go:117] "RemoveContainer" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.900996 4183 scope.go:117] "RemoveContainer" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.901228 4183 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-apiserver_kube-apiserver-crc_openshift-kube-apiserver_53c1db1508241fbac1bedf9130341ffe_0 in pod sandbox e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 from index: no such id: '7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5'" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.901273 4183 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-apiserver_kube-apiserver-crc_openshift-kube-apiserver_53c1db1508241fbac1bedf9130341ffe_0 in pod sandbox e09ebdd208d66afb0ba856fe61dfd2ca4a4d9b0d5aab8790984ba43fbfd18d83 from index: no such id: '7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5'" containerID="7dd73eb770167cd66114128ad8dba397505ee9cdc5b0689a61c761c5f2d040d5" Aug 13 20:03:53 crc kubenswrapper[4183]: E0813 20:03:53.914540 4183 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": container with ID starting with f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480 not found: ID does not exist" containerID="f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.914650 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480"} err="failed to get container status \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": rpc error: code = NotFound desc = could not find container \"f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480\": container with ID starting with f18f93dd516534fda669b4711d2c033dfae86dc4cdc8330c6f60ad2686e07480 not found: ID does not exist" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.914676 4183 scope.go:117] "RemoveContainer" containerID="32fd955a56de5925978ca9c74fd5477e1123ae91904669c797c57e09bb337d84" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.985211 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"} Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.989633 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.990768 4183 status_manager.go:853] 
"Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.992256 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.993070 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.994202 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.995251 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.997368 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.998538 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.999235 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:53 crc kubenswrapper[4183]: I0813 20:03:53.999727 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.000364 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.000917 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.001581 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.005208 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.006195 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.006867 4183 status_manager.go:853] "Failed 
to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.007503 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.008135 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.010212 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.012267 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.013308 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.014224 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.015561 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.017215 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.018054 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.018769 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.034042 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.050142 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.067978 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"bf055e84f32193b9c1c21b0c34a61f01","Type":"ContainerStarted","Data":"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.070249 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 
20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.071425 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.073964 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.077460 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.078588 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.081030 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.082476 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.084958 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7xghp_51a02bbf-2d40-4f84-868a-d399ea18a846/approver/0.log" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.086267 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"e302077a679b703dfa8553f1ea474302e86cc72bc23b53926bdc62ce33df0f64"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.088211 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.094913 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.097251 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.102324 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.102620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.103968 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.106639 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.113311 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.116123 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.118679 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.123027 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.124242 
4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.125181 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.125924 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.126600 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.128062 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.129082 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.129903 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.135059 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.136239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f"} Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.138270 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.152304 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.153725 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.155006 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.156625 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.159271 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.164315 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.165074 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.165661 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.166382 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.167048 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.172278 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.176069 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.176915 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.181046 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.183126 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.189981 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.193940 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.198031 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.199125 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.200183 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.205213 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.211008 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.211825 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.222035 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.222627 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.230992 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.233069 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.233933 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.234623 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.235869 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.242517 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.243296 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.245137 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.246348 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.249618 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.250358 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.251385 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.252168 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.252716 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.253704 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.254575 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.255223 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.261472 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.281818 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.287989 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.289834 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.300725 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.321118 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.343664 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.361538 4183 scope.go:117] "RemoveContainer" containerID="850160bdc6ea5ea83ea4c13388d6776a10113289f49f21b1ead74f152e5a1512"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.368418 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.382082 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.408899 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.425935 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.431358 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.436109 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"}
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.437653 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.439496 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.441269 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.475968 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.481505 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.502828 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.525338 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.629285 4183 scope.go:117] "RemoveContainer" containerID="a9c5c60859fe5965d3e56b1f36415e36c4ebccf094bcf5a836013b9db4262143"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708140 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708286 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708320 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708378 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Pending"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708414 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.708451 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Pending"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.738941 4183 scope.go:117] "RemoveContainer" containerID="b52df8e62a367664028244f096d775f6f9e6f572cd730e4e147620381f6880c3"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875372 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875453 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875544 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: I0813 20:03:54.875464 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:03:54 crc kubenswrapper[4183]: E0813 20:03:54.928188 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="7s"
Aug 13 20:03:54 crc kubenswrapper[4183]: E0813 20:03:54.960376 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92b2a8634cfe8a21cffcc98cc8c87160.slice/crio-dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9.scope\": RecentStats: unable to find data in memory cache]"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.214351 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.218089 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.219195 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.219961 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.224599 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.228904 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.229919 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.231029 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.231920 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.235357 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.236962 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.238438 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.240074 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.241611 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.245553 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.249464 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.251421 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.254160 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.255417 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.256743 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.257566 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.260917 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.264107 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.266770 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:55 crc kubenswrapper[4183]:
I0813 20:03:55.277921 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.279402 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.283013 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.285316 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.290481 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:55 crc kubenswrapper[4183]: I0813 20:03:55.620454 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.621742 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/0.log" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622475 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" exitCode=255 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622574 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622611 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.622633 4183 scope.go:117] "RemoveContainer" containerID="98e20994b78d70c7d9739afcbef1576151aa009516cab8609a2c74b997bfed1a" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.627053 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.628078 4183 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629064 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629596 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9" exitCode=0 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.629704 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerDied","Data":"dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.630399 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.630462 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.632367 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.632479 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.633693 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.648340 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.650425 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.652757 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.655106 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.656986 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.658549 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba" exitCode=0 Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.658644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerDied","Data":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.660075 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.660097 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.663572 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:55.663943 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.664898 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665145 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665404 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665450 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.665467 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:03:59 crc 
kubenswrapper[4183]: I0813 20:03:55.665996 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.666515 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.667399 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.671709 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.676322 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc 
kubenswrapper[4183]: I0813 20:03:55.676514 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.681983 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.682718 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.683350 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.683933 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection 
refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.684530 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.685091 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.685546 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686263 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686545 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 
10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686586 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.686900 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.687592 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.693863 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.694675 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.718511 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.755648 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.758261 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.759512 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.761201 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.765354 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.785133 4183 status_manager.go:853] "Failed to get status for pod" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" pod="openshift-marketplace/community-operators-8jhz6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8jhz6\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.801950 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.834459 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc 
kubenswrapper[4183]: I0813 20:03:55.842708 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.869897 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.901900 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.907735 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.922435 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: 
connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.942988 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.963378 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:55.983100 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.004700 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.024106 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.047217 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.061301 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.081932 4183 status_manager.go:853] "Failed to get status for pod" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rmwfn\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.101674 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.122544 4183 status_manager.go:853] "Failed to get status for pod" 
podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.157367 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.167833 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.181304 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.201007 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: 
connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.221447 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.246117 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.262127 4183 status_manager.go:853] "Failed to get status for pod" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" pod="openshift-marketplace/certified-operators-7287f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7287f\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.286681 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.301302 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.321179 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.340915 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.696466 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697332 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697924 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316" exitCode=255
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.697963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.698501 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:56.698518 4183 scope.go:117] "RemoveContainer" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.706332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.715764 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.723053 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:57.726435 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.737374 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.739719 4183 generic.go:334] "Generic (PLEG): container finished"
podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" exitCode=0
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.739832 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.740447 4183 scope.go:117] "RemoveContainer" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.744316 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.745125 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.748107 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"}
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:58.748152 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:58.788129 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver
pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.288123 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.288273 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.290115 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.290189 4183 status_manager.go:853] "Failed to get status for pod" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" pod="openshift-marketplace/redhat-operators-dcqzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dcqzh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.291275 4183 status_manager.go:853] "Failed to get status for pod" podUID="b23d6435-6431-4905-b41b-a517327385e5" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-67cbf64bc9-jjfds\": dial tcp
192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.292131 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.293145 4183 status_manager.go:853] "Failed to get status for pod" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-8b455464d-f9xdt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.293268 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.294218 4183 status_manager.go:853] "Failed to get status for pod" podUID="71af81a9-7d43-49b2-9287-c375900aa905" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.295226 4183 status_manager.go:853] "Failed to get status for pod" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" pod="openshift-marketplace/community-operators-8jhz6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8jhz6\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813
20:03:59.295645 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: E0813 20:03:59.295730 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.296617 4183 status_manager.go:853] "Failed to get status for pod" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" pod="openshift-marketplace/community-operators-k9qqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k9qqb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.297883 4183 status_manager.go:853] "Failed to get status for pod" podUID="bf055e84f32193b9c1c21b0c34a61f01" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.299107 4183 status_manager.go:853] "Failed to get status for pod" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" pod="openshift-marketplace/redhat-operators-f4jkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-f4jkp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.301006 4183 status_manager.go:853] "Failed to get status for pod" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" pod="openshift-marketplace/redhat-marketplace-8s8pc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8s8pc\": dial tcp
192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.301906 4183 status_manager.go:853] "Failed to get status for pod" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" pod="openshift-network-node-identity/network-node-identity-7xghp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-7xghp\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.303484 4183 status_manager.go:853] "Failed to get status for pod" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" pod="openshift-console/console-644bb77b49-5x5xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-644bb77b49-5x5xk\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.305187 4183 status_manager.go:853] "Failed to get status for pod" podUID="92b2a8634cfe8a21cffcc98cc8c87160" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.306082 4183 status_manager.go:853] "Failed to get status for pod" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-546b4f8984-pwccz\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.308614 4183 status_manager.go:853] "Failed to get status for pod" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" pod="openshift-marketplace/certified-operators-g4v97" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g4v97\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.309539 4183 status_manager.go:853] "Failed to get status for pod" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-5dbbc74dc9-cp5cd\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.312005 4183 status_manager.go:853] "Failed to get status for pod" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7978d7d7f6-2nt8z\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.313185 4183 status_manager.go:853] "Failed to get status for pod" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" pod="openshift-image-registry/image-registry-7cbd5666ff-bbfrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7cbd5666ff-bbfrf\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.314671 4183 status_manager.go:853] "Failed to get status for pod" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-6f6cb54958-rbddb\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13
20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.316158 4183 status_manager.go:853] "Failed to get status for pod" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.320652 4183 status_manager.go:853] "Failed to get status for pod" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" pod="openshift-marketplace/redhat-marketplace-rmwfn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rmwfn\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.321893 4183 status_manager.go:853] "Failed to get status for pod" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.322873 4183 status_manager.go:853] "Failed to get status for pod" podUID="6268b7fe-8910-4505-b404-6f1df638105c" pod="openshift-console/downloads-65476884b9-9wcvx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-65476884b9-9wcvx\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.324685 4183 status_manager.go:853] "Failed to get status for pod" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" pod="openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-846977c6bc-7gjhh\": dial tcp
192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.327030 4183 status_manager.go:853] "Failed to get status for pod" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-78d54458c4-sc8h7\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.328459 4183 status_manager.go:853] "Failed to get status for pod" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-7d46d5bb6d-rrg6t\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.329474 4183 status_manager.go:853] "Failed to get status for pod" podUID="00d32440-4cce-4609-96f3-51ac94480aab" pod="openshift-controller-manager/controller-manager-78589965b8-vmcwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78589965b8-vmcwt\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.330380 4183 status_manager.go:853] "Failed to get status for pod" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" pod="openshift-marketplace/certified-operators-7287f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-7287f\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.331342 4183 status_manager.go:853] "Failed to get status for pod" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" pod="openshift-console/console-84fccc7b6-mkncc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-84fccc7b6-mkncc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.332105 4183 status_manager.go:853] "Failed to get status for pod" podUID="79050916-d488-4806-b556-1b0078b31e53" pod="openshift-kube-controller-manager/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-10-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.332755 4183 status_manager.go:853] "Failed to get status for pod" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-7769bd8d7d-q5cvv\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.333584 4183 status_manager.go:853] "Failed to get status for pod" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" pod="openshift-kube-scheduler/installer-7-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.334273 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.334880 4183 status_manager.go:853] "Failed to get status for pod" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2"
pod="openshift-network-operator/network-operator-767c585db5-zd56b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-767c585db5-zd56b\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.335981 4183 status_manager.go:853] "Failed to get status for pod" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" pod="openshift-marketplace/redhat-operators-dcqzh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-dcqzh\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.337487 4183 status_manager.go:853] "Failed to get status for pod" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-77658b5b66-dq5sc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.539176 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.539344 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.776658 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"
Aug 13 20:03:59 crc kubenswrapper[4183]:
E0813 20:03:59.777414 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:03:59 crc kubenswrapper[4183]: I0813 20:03:59.778308 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd"}
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.820526 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"}
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.836446 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"}
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.836953 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.839287 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813
20:04:00.839373 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.868702 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log"
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.872256 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/1.log"
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.873984 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.876957 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca" exitCode=255
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877027 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"}
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877070 4183 scope.go:117] "RemoveContainer" containerID="1a09e11981ae9c63bb4ca1d27de2b7a914e1b4ad8edd3d0d73f1ad5239373316"
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877941 4183 scope.go:117] "RemoveContainer"
containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"
Aug 13 20:04:00 crc kubenswrapper[4183]: I0813 20:04:00.877988 4183 scope.go:117] "RemoveContainer" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"
Aug 13 20:04:00 crc kubenswrapper[4183]: E0813 20:04:00.878661 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.912502 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"}
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.918382 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log"
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.922374 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952170 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952209 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952863 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"92b2a8634cfe8a21cffcc98cc8c87160","Type":"ContainerStarted","Data":"da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a"}
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.952902 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.953193 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:04:01 crc kubenswrapper[4183]: I0813 20:04:01.953280 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.974963 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"48128e8d38b5cbcd2691da698bd9cac3","Type":"ContainerStarted","Data":"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"}
Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.983911 4183 logs.go:325] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.985984 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" exitCode=1 Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986118 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"} Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986157 4183 scope.go:117] "RemoveContainer" containerID="7b2c6478f4940bab46ab22fb59aeffb640ce0f0e8ccd61b80c50a3afdd842157" Aug 13 20:04:02 crc kubenswrapper[4183]: I0813 20:04:02.986735 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:02 crc kubenswrapper[4183]: E0813 20:04:02.987548 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:03 crc kubenswrapper[4183]: I0813 20:04:03.998006 4183 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" exitCode=0 Aug 13 20:04:03 crc kubenswrapper[4183]: I0813 20:04:03.998105 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc"} Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003442 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003935 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.003971 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.004254 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.038070 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.523272 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.524281 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71" Aug 13 20:04:04 crc kubenswrapper[4183]: E0813 20:04:04.524679 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871606 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871700 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871749 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:04 crc kubenswrapper[4183]: I0813 20:04:04.871952 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.232970 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.235698 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:06 crc kubenswrapper[4183]: I0813 20:04:06.247545 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.080683 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0"} Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.086603 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" exitCode=0 Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.086722 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.090544 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694" exitCode=0 Aug 13 20:04:07 crc kubenswrapper[4183]: I0813 20:04:07.090601 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694"} Aug 13 20:04:09 crc kubenswrapper[4183]: I0813 20:04:09.540223 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:09 crc kubenswrapper[4183]: I0813 20:04:09.542063 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.128627 4183 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" exitCode=0 Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.128731 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.139614 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerStarted","Data":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.144463 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerStarted","Data":"844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568"} Aug 13 20:04:10 crc kubenswrapper[4183]: I0813 20:04:10.584765 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:04:12 crc kubenswrapper[4183]: I0813 20:04:12.167278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843"} 
Aug 13 20:04:13 crc kubenswrapper[4183]: I0813 20:04:13.463032 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.370971 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.372425 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7287f"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.735468 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.737108 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.872447 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.872692 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.871953 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873120 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873545 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.873658 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.876995 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"} pod="openshift-console/downloads-65476884b9-9wcvx" containerMessage="Container download-server failed liveness probe, will be restarted"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.877174 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" containerID="cri-o://9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c" gracePeriod=2
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.936494 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.937746 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.938058 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 20:04:14 crc kubenswrapper[4183]: I0813 20:04:14.938080 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rmwfn"
Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.210617 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"
Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.210672 4183 scope.go:117] "RemoveContainer" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"
Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.288075 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log"
Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.288188 4183 generic.go:334] "Generic (PLEG): container finished" podID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" exitCode=1
Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.289403 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerDied","Data":"cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436"}
Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.289888 4183 scope.go:117] "RemoveContainer" containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436"
Aug 13 20:04:15 crc kubenswrapper[4183]: I0813 20:04:15.939985 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:15 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:15 crc kubenswrapper[4183]: >
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.102098 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:16 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:16 crc kubenswrapper[4183]: >
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.109451 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:16 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:16 crc kubenswrapper[4183]: >
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.247089 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.300679 4183 generic.go:334] "Generic (PLEG): container finished" podID="6268b7fe-8910-4505-b404-6f1df638105c" containerID="9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c" exitCode=0
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.300729 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerDied","Data":"9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c"}
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.301340 4183 scope.go:117] "RemoveContainer" containerID="74df4184eccc1eab0b2fc55559bbac3d87ade106234259f3272b047110a68b24"
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.305561 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log"
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.306619 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:16 crc kubenswrapper[4183]: I0813 20:04:16.307283 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"}
Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.317334 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"00e210723fa2ab3c15d1bb1e413bb28a867eb77be9c752bffa81f06d8a65f0ee"}
Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.318439 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx"
Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.318740 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.319123 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.321562 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log"
Aug 13 20:04:17 crc kubenswrapper[4183]: I0813 20:04:17.321649 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a"}
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.332105 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log"
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.334088 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.334885 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"}
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335450 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335487 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335485 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body=
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.335605 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused"
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.336257 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:04:18 crc kubenswrapper[4183]: I0813 20:04:18.336333 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.211510 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"
Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.539623 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.540660 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.658478 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 20:04:19 crc kubenswrapper[4183]: I0813 20:04:19.658588 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.377545 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log"
Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.666273 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.666350 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.667514 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.667578 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:04:20 crc kubenswrapper[4183]: I0813 20:04:20.847498 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:04:20 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:04:20 crc kubenswrapper[4183]: >
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.391098 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.391224 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"}
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394316 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394375 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.394425 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.405955 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.424731 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.427524 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/2.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.428573 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.430940 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" exitCode=255
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.431015 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"}
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.431063 4183 scope.go:117] "RemoveContainer" containerID="807c95a3bab23454d169be67ad3880f3c2b11c9bf2ae434a29dc423b56035cca"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.432643 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:21 crc kubenswrapper[4183]: I0813 20:04:21.432698 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:21 crc kubenswrapper[4183]: E0813 20:04:21.435988 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.441900 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.444007 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.445109 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/1.log"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.449403 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" exitCode=255
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.452444 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"}
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.452721 4183 scope.go:117] "RemoveContainer" containerID="21969208e6f9e5d5177b9a170e1a6076e7e4022118a21462b693bf056d71642a"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.454578 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.454626 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.455260 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Aug 13 20:04:22 crc kubenswrapper[4183]: I0813 20:04:22.455951 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused"
Aug 13 20:04:22 crc kubenswrapper[4183]: E0813 20:04:22.455397 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.677346 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678661 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/1.log"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678725 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" exitCode=1
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678937 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"}
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.678987 4183 scope.go:117] "RemoveContainer" containerID="b85554f0e1f346055c3ddba50c820fa4bcf10f0fb1c0952a5fa718f250783d71"
Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.679550 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e"
Aug 13 20:04:23 crc kubenswrapper[4183]:
E0813 20:04:23.680072 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.684831 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log" Aug 13 20:04:23 crc kubenswrapper[4183]: I0813 20:04:23.685747 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.522960 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.695084 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.695994 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:24 crc kubenswrapper[4183]: E0813 20:04:24.696619 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.871956 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872068 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872273 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:24 crc kubenswrapper[4183]: I0813 20:04:24.872125 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.531477 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:25 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:25 crc kubenswrapper[4183]: > Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.665334 4183 
kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.665530 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.666412 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.666474 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:25 crc kubenswrapper[4183]: E0813 20:04:25.667564 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.707556 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:25 crc kubenswrapper[4183]: I0813 20:04:25.707921 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:25 crc kubenswrapper[4183]: E0813 20:04:25.717101 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:26 crc kubenswrapper[4183]: I0813 20:04:26.082431 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:26 crc kubenswrapper[4183]: > Aug 13 20:04:26 crc kubenswrapper[4183]: I0813 20:04:26.102356 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:26 crc kubenswrapper[4183]: > Aug 13 20:04:29 crc kubenswrapper[4183]: I0813 20:04:29.540563 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:29 crc kubenswrapper[4183]: I0813 20:04:29.541077 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: 
connection refused" Aug 13 20:04:30 crc kubenswrapper[4183]: I0813 20:04:30.809386 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:30 crc kubenswrapper[4183]: > Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.872612 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873160 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873017 4183 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Aug 13 20:04:34 crc kubenswrapper[4183]: I0813 20:04:34.873257 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Aug 13 20:04:35 crc kubenswrapper[4183]: I0813 20:04:35.523618 4183 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:35 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:35 crc kubenswrapper[4183]: > Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.055527 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:36 crc kubenswrapper[4183]: > Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.067382 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:36 crc kubenswrapper[4183]: > Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.209341 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:36 crc kubenswrapper[4183]: E0813 20:04:36.209960 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:36 crc kubenswrapper[4183]: I0813 20:04:36.941233 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="48128e8d38b5cbcd2691da698bd9cac3" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:04:38 crc kubenswrapper[4183]: I0813 20:04:38.803919 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.220261 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.220322 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:39 crc kubenswrapper[4183]: E0813 20:04:39.221136 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.437995 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.540376 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: 
connection refused" start-of-body= Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.540474 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.662007 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Aug 13 20:04:39 crc kubenswrapper[4183]: I0813 20:04:39.739757 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Aug 13 20:04:40 crc kubenswrapper[4183]: I0813 20:04:40.928980 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:40 crc kubenswrapper[4183]: > Aug 13 20:04:43 crc kubenswrapper[4183]: I0813 20:04:43.083757 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Aug 13 20:04:44 crc kubenswrapper[4183]: I0813 20:04:44.326702 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Aug 13 20:04:44 crc kubenswrapper[4183]: I0813 20:04:44.890538 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65476884b9-9wcvx" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.404275 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 
20:04:45.410685 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.533142 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="registry-server" probeResult="failure" output=< Aug 13 20:04:45 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:04:45 crc kubenswrapper[4183]: > Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.549551 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8s8pc" Aug 13 20:04:45 crc kubenswrapper[4183]: I0813 20:04:45.559224 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.210305 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.777538 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.862868 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.862977 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"} Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.863354 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.866328 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.866537 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:47 crc kubenswrapper[4183]: I0813 20:04:47.935187 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.415454 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.871874 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:04:48 crc kubenswrapper[4183]: I0813 20:04:48.872663 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:04:49 crc kubenswrapper[4183]: 
I0813 20:04:49.539935 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.540612 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.799903 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:49 crc kubenswrapper[4183]: I0813 20:04:49.943986 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.273701 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.900178 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907557 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/2.log" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907669 4183 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" exitCode=1 Aug 13 20:04:50 crc 
kubenswrapper[4183]: I0813 20:04:50.907705 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"} Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.907743 4183 scope.go:117] "RemoveContainer" containerID="a40b12b128b1e9065da4a3aeeb59afb89c5abde3d01a932b1d00d9946d49c42e" Aug 13 20:04:50 crc kubenswrapper[4183]: I0813 20:04:50.908626 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:04:50 crc kubenswrapper[4183]: E0813 20:04:50.909163 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.210255 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.210305 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.212502 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.868191 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Aug 13 20:04:51 crc kubenswrapper[4183]: I0813 20:04:51.917089 4183 logs.go:325] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.150279 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.761529 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.926570 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.928558 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log" Aug 13 20:04:52 crc kubenswrapper[4183]: I0813 20:04:52.930835 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"} Aug 13 20:04:53 crc kubenswrapper[4183]: I0813 20:04:53.243045 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.245119 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.494708 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7287f" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.522671 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.523584 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:04:54 crc kubenswrapper[4183]: E0813 20:04:54.524261 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.626589 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7287f" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714562 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714725 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714823 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.714889 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running" Aug 13 20:04:54 crc kubenswrapper[4183]: I0813 20:04:54.996764 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log" Aug 13 20:04:55 crc kubenswrapper[4183]: 
I0813 20:04:55.007074 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/0.log"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007175 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b" exitCode=1
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007251 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"}
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.007368 4183 scope.go:117] "RemoveContainer" containerID="957c48a64bf505f55933cfc9cf99bce461d72f89938aa38299be4b2e4c832fb2"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.008069 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:04:55 crc kubenswrapper[4183]: E0813 20:04:55.008829 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:04:55 crc kubenswrapper[4183]: I0813 20:04:55.904963 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.019162 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.020146 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.021084 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"}
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.024920 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.452492 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Aug 13 20:04:56 crc kubenswrapper[4183]: I0813 20:04:56.474971 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.089106 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.629887 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Aug 13 20:04:57 crc kubenswrapper[4183]: I0813 20:04:57.789896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.152330 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.472077 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.562995 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.675559 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Aug 13 20:04:58 crc kubenswrapper[4183]: I0813 20:04:58.893419 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.073153 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.075333 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/3.log"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.076138 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077032 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" exitCode=255
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077097 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"}
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.077146 4183 scope.go:117] "RemoveContainer" containerID="ba82d955226ea1e51a72b2bf71d781c65d24d78e4274d8a9bbb39973d6793c6b"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.078341 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:04:59 crc kubenswrapper[4183]: E0813 20:04:59.078943 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.135243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.541093 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.542262 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:04:59 crc kubenswrapper[4183]: I0813 20:04:59.886707 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.090156 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.093150 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.094540 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/2.log"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095262 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" exitCode=255
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095305 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"}
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.095764 4183 scope.go:117] "RemoveContainer" containerID="d703fa1aef3414ff17f21755cb4d9348dcee4860bbb97e5def23b2a5e008c021"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.096302 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.096440 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:00 crc kubenswrapper[4183]: E0813 20:05:00.097254 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.114000 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.665449 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.666145 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.668984 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.817164 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.860638 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.880066 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Aug 13 20:05:00 crc kubenswrapper[4183]: I0813 20:05:00.922569 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.004185 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.104914 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.106219 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.110562 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.110684 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:01 crc kubenswrapper[4183]: E0813 20:05:01.114138 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.669639 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.802689 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Aug 13 20:05:01 crc kubenswrapper[4183]: I0813 20:05:01.997359 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.075704 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.114082 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.114415 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:02 crc kubenswrapper[4183]: E0813 20:05:02.115311 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.270366 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.361686 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.462052 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Aug 13 20:05:02 crc kubenswrapper[4183]: I0813 20:05:02.876429 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.121445 4183 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6" exitCode=0
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.121510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"}
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.534136 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Aug 13 20:05:03 crc kubenswrapper[4183]: I0813 20:05:03.821185 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.024845 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.357290 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.467645 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Aug 13 20:05:04 crc kubenswrapper[4183]: I0813 20:05:04.598329 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.140521 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636"}
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.415288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.666656 4183 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.667611 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb"
Aug 13 20:05:05 crc kubenswrapper[4183]: I0813 20:05:05.667649 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9"
Aug 13 20:05:05 crc kubenswrapper[4183]: E0813 20:05:05.668446 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.189768 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.210115 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"
Aug 13 20:05:06 crc kubenswrapper[4183]: E0813 20:05:06.210718 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.251707 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.252974 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.298212 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.311324 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.543153 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Aug 13 20:05:06 crc kubenswrapper[4183]: I0813 20:05:06.788729 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.130607 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.210910 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.426231 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.680896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Aug 13 20:05:07 crc kubenswrapper[4183]: I0813 20:05:07.833891 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.170083 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.170396 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.172414 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"}
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.376013 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.627849 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.740880 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.759596 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Aug 13 20:05:08 crc kubenswrapper[4183]: I0813 20:05:08.778671 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.182649 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24" exitCode=0
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.182831 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"}
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.369612 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.540462 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.540555 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused"
Aug 13 20:05:09 crc kubenswrapper[4183]: I0813 20:05:09.816105 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.072842 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.317996 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.322904 4183 generic.go:334] "Generic (PLEG): container finished" podID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" containerID="9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4" exitCode=255
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.322974 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerDied","Data":"9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"}
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.324271 4183 scope.go:117] "RemoveContainer" containerID="9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.500315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.650605 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Aug 13 20:05:10 crc kubenswrapper[4183]: I0813 20:05:10.861252 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.112401 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.336302 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerStarted","Data":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"}
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.339472 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7874c8775-kh4j9_ec1bae8b-3200-4ad9-b33b-cf8701f3027c/machine-approver-controller/0.log"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.340602 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"b6fafe7cac89983f8701bc5ed1df09e2b82c358b3a757377ca15de6546b5eb9f"}
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.411131 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.707689 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Aug 13 20:05:11 crc kubenswrapper[4183]: I0813 20:05:11.739312 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.205833 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.599179 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Aug 13 20:05:12 crc kubenswrapper[4183]: I0813 20:05:12.955315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.078966 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.098878 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.112587 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=172.083720651 podStartE2EDuration="2m52.083720651s" podCreationTimestamp="2025-08-13 20:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:04:37.37903193 +0000 UTC m=+1244.071696868" watchObservedRunningTime="2025-08-13 20:05:13.083720651 +0000 UTC m=+1279.776385389"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.116733 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4v97" podStartSLOduration=35619880.42286533 podStartE2EDuration="9894h30m55.116660334s" podCreationTimestamp="2024-06-27 13:34:18 +0000 UTC" firstStartedPulling="2025-08-13 19:57:52.840933971 +0000 UTC m=+839.533598689" lastFinishedPulling="2025-08-13 20:04:07.534728981 +0000 UTC m=+1214.227393689" observedRunningTime="2025-08-13 20:04:38.881376951 +0000 UTC m=+1245.574041929" watchObservedRunningTime="2025-08-13 20:05:13.116660334 +0000 UTC m=+1279.809325042"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.117062 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rmwfn" podStartSLOduration=35620009.78697888 podStartE2EDuration="9894h31m39.117029724s" podCreationTimestamp="2024-06-27 13:33:34 +0000 UTC" firstStartedPulling="2025-08-13 19:59:18.068965491 +0000 UTC m=+924.761630139" lastFinishedPulling="2025-08-13 20:04:07.399016379 +0000 UTC m=+1214.091680987" observedRunningTime="2025-08-13 20:04:39.012673861 +0000 UTC m=+1245.705338829" watchObservedRunningTime="2025-08-13 20:05:13.117029724 +0000 UTC m=+1279.809694442"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.208428 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-846977c6bc-7gjhh","openshift-controller-manager/controller-manager-78589965b8-vmcwt","openshift-image-registry/image-registry-7cbd5666ff-bbfrf","openshift-console/console-84fccc7b6-mkncc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209287 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209340 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="5e53e26d-e94d-45dc-b706-677ed667c8ce"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209479 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.209510 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="09143b32-bfcb-4682-a82f-e0bfa420e445"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.224634 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d32440-4cce-4609-96f3-51ac94480aab" path="/var/lib/kubelet/pods/00d32440-4cce-4609-96f3-51ac94480aab/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.226609 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" path="/var/lib/kubelet/pods/42b6a393-6194-4620-bf8f-7e4b6cbe5679/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.229290 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" path="/var/lib/kubelet/pods/b233d916-bfe3-4ae5-ae39-6b574d1aa05e/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.231822 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" path="/var/lib/kubelet/pods/ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d/volumes"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.233054 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx","openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.237345 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" podNamespace="openshift-controller-manager" podName="controller-manager-598fc85fd4-8wlsm"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.249551 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250646 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.250739 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250754 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.250970 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.250988 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251000 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251008 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251030 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251037 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251050 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251060 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: E0813 20:05:13.251074 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.251082 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252436 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252897 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252925 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d32440-4cce-4609-96f3-51ac94480aab" containerName="controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252938 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252952 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b233d916-bfe3-4ae5-ae39-6b574d1aa05e" containerName="console"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252966 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee23bfc7-1e7a-4bb4-80c0-6a228a1f6d2d" containerName="route-controller-manager"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252982 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b6a393-6194-4620-bf8f-7e4b6cbe5679" containerName="registry"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.252995 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="79050916-d488-4806-b556-1b0078b31e53" containerName="installer"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.267733 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.269541 4183 topology_manager.go:215] "Topology Admit Handler" podUID="becc7e17-2bc7-417d-832f-55127299d70f" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.269755 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.272943 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.276321 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.282374 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.282731 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.289509 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292292 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292390 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292493 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292496 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292912 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.292984 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.293303 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.293451 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.307677 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.394716 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.408564 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410401 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410445 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410484 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410552 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410603 4183 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410646 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410715 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.410887 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.462438 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512368 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512461 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512498 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512528 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512562 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc 
kubenswrapper[4183]: I0813 20:05:13.512598 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512622 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.512684 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648609 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: 
\"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648683 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.648763 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.649909 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.651487 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.655027 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 
20:05:13.676275 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.677413 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.954091 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"route-controller-manager-6884dcf749-n4qpx\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") " pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:13 crc kubenswrapper[4183]: I0813 20:05:13.958326 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"controller-manager-598fc85fd4-8wlsm\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") " pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.023275 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=59.023213394 podStartE2EDuration="59.023213394s" podCreationTimestamp="2025-08-13 20:04:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:14.020333212 +0000 UTC m=+1280.712998070" watchObservedRunningTime="2025-08-13 20:05:14.023213394 +0000 UTC m=+1280.715878202" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.066177 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k9qqb" podStartSLOduration=35619820.18712965 podStartE2EDuration="9894h30m58.066128853s" podCreationTimestamp="2024-06-27 13:34:16 +0000 UTC" firstStartedPulling="2025-08-13 19:57:51.83654203 +0000 UTC m=+838.529206798" lastFinishedPulling="2025-08-13 20:05:09.715541279 +0000 UTC m=+1276.408206007" observedRunningTime="2025-08-13 20:05:14.064306021 +0000 UTC m=+1280.756970859" watchObservedRunningTime="2025-08-13 20:05:14.066128853 +0000 UTC m=+1280.758793581" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.128077 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.204184 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.205979 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.205874035 podStartE2EDuration="59.205874035s" podCreationTimestamp="2025-08-13 20:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:14.19801498 +0000 UTC m=+1280.890679758" watchObservedRunningTime="2025-08-13 20:05:14.205874035 +0000 UTC m=+1280.898539443" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.214829 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.222339 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.255305 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.565414 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.565913 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.669956 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Aug 13 20:05:14 crc kubenswrapper[4183]: I0813 20:05:14.855193 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.152712 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.309951 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.628243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.658057 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Aug 13 
20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.686472 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:15 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:15 crc kubenswrapper[4183]: > Aug 13 20:05:15 crc kubenswrapper[4183]: I0813 20:05:15.781369 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.344985 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.485318 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.513489 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Aug 13 20:05:16 crc kubenswrapper[4183]: I0813 20:05:16.789608 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146002 4183 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146600 4183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" 
Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146629 4183 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626" Netns:"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod "controller-manager-598fc85fd4-8wlsm" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.146742 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-598fc85fd4-8wlsm_openshift-controller-manager(8b8d1c48-5762-450f-bd4d-9134869f432b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-598fc85fd4-8wlsm_openshift-controller-manager(8b8d1c48-5762-450f-bd4d-9134869f432b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-598fc85fd4-8wlsm_openshift-controller-manager_8b8d1c48-5762-450f-bd4d-9134869f432b_0(ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626): error adding pod openshift-controller-manager_controller-manager-598fc85fd4-8wlsm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626\\\" Netns:\\\"/var/run/netns/5532d8e4-703c-425a-acfc-595dd19fe6e2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-598fc85fd4-8wlsm;K8S_POD_INFRA_CONTAINER_ID=ace62b16cf271e4d6faf88bca4a6f7972a49d06e06e546ef9f1226bfa31e4626;K8S_POD_UID=8b8d1c48-5762-450f-bd4d-9134869f432b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm] networking: Multus: [openshift-controller-manager/controller-manager-598fc85fd4-8wlsm/8b8d1c48-5762-450f-bd4d-9134869f432b]: error waiting for pod: pod \\\"controller-manager-598fc85fd4-8wlsm\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185604 4183 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185687 4183 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.185746 4183 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Aug 13 20:05:17 crc kubenswrapper[4183]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23" Netns:"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod "route-controller-manager-6884dcf749-n4qpx" not found Aug 13 20:05:17 crc kubenswrapper[4183]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Aug 13 20:05:17 crc kubenswrapper[4183]: > pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.186516 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager(becc7e17-2bc7-417d-832f-55127299d70f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager(becc7e17-2bc7-417d-832f-55127299d70f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-6884dcf749-n4qpx_openshift-route-controller-manager_becc7e17-2bc7-417d-832f-55127299d70f_0(d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23): error adding pod openshift-route-controller-manager_route-controller-manager-6884dcf749-n4qpx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23\\\" Netns:\\\"/var/run/netns/fc943dc9-e5f3-426f-a251-ab81064f93c0\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6884dcf749-n4qpx;K8S_POD_INFRA_CONTAINER_ID=d71fafccd793fd37294a5350fabb9749f483362ed8df8f4d3693083c86399c23;K8S_POD_UID=becc7e17-2bc7-417d-832f-55127299d70f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx/becc7e17-2bc7-417d-832f-55127299d70f]: error waiting for pod: pod \\\"route-controller-manager-6884dcf749-n4qpx\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.209062 4183 scope.go:117] "RemoveContainer" 
containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.209095 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:17 crc kubenswrapper[4183]: E0813 20:05:17.209766 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.297640 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.302574 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.381660 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.509832 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 20:05:17.625271 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Aug 13 20:05:17 crc kubenswrapper[4183]: I0813 
20:05:17.792176 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.175892 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.243339 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.321978 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.494179 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497100 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/0.log" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497201 4183 generic.go:334] "Generic (PLEG): container finished" podID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" exitCode=255 Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497239 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerDied","Data":"0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a"} Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.497284 4183 scope.go:117] "RemoveContainer" 
containerID="cde7b91dcd48d4e06df4d6dec59646da2d7b63ba4245f33286ad238c06706436" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.498290 4183 scope.go:117] "RemoveContainer" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" Aug 13 20:05:18 crc kubenswrapper[4183]: E0813 20:05:18.499112 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"control-plane-machine-set-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=control-plane-machine-set-operator pod=control-plane-machine-set-operator-649bd778b4-tt5tw_openshift-machine-api(45a8038e-e7f2-4d93-a6f5-7753aa54e63f)\"" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.666389 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.818229 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.818437 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.875753 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.977189 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.995738 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:05:18 crc kubenswrapper[4183]: I0813 20:05:18.996007 4183 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:18.996970 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.517497 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540079 4183 patch_prober.go:28] interesting pod/console-5d9678894c-wx62n container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540285 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.57:8443/health\": dial tcp 10.217.0.57:8443: connect: connection refused" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.540403 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.545389 4183 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} pod="openshift-console/console-5d9678894c-wx62n" containerMessage="Container console failed startup probe, will be restarted" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.589297 4183 reflector.go:351] Caches populated 
for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.700554 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"] Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.700751 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.709757 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:19 crc kubenswrapper[4183]: I0813 20:05:19.977120 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:19 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:19 crc kubenswrapper[4183]: > Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.011674 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.084552 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.210537 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:05:20 crc kubenswrapper[4183]: E0813 20:05:20.211602 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator 
pod=marketplace-operator-8b455464d-f9xdt_openshift-marketplace(3482be94-0cdb-4e2a-889b-e5fac59fdbf5)\"" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.219236 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.743720 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Aug 13 20:05:20 crc kubenswrapper[4183]: I0813 20:05:20.867244 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.066612 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.505896 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.552288 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:21 crc kubenswrapper[4183]: I0813 20:05:21.669562 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.088839 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.293069 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.369896 4183 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.609190 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"] Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.715427 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"] Aug 13 20:05:22 crc kubenswrapper[4183]: I0813 20:05:22.789590 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.111893 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.279471 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.553213 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerStarted","Data":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.553762 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.554111 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerStarted","Data":"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.556456 4183 patch_prober.go:28] interesting 
pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.556537 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.557599 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerStarted","Data":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.557658 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerStarted","Data":"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7"} Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.558583 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.560568 4183 patch_prober.go:28] interesting pod/route-controller-manager-6884dcf749-n4qpx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" start-of-body= Aug 13 20:05:23 crc 
kubenswrapper[4183]: I0813 20:05:23.560953 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.75:8443/healthz\": dial tcp 10.217.0.75:8443: connect: connection refused" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.636023 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podStartSLOduration=242.635956854 podStartE2EDuration="4m2.635956854s" podCreationTimestamp="2025-08-13 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:23.62989526 +0000 UTC m=+1290.322560408" watchObservedRunningTime="2025-08-13 20:05:23.635956854 +0000 UTC m=+1290.328621982" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.706151 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.827966 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Aug 13 20:05:23 crc kubenswrapper[4183]: I0813 20:05:23.949042 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.086654 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.125475 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.191367 4183 reflector.go:351] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.205474 4183 patch_prober.go:28] interesting pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.205611 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.365075 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.567394 4183 patch_prober.go:28] interesting pod/controller-manager-598fc85fd4-8wlsm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" start-of-body= Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.567502 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": dial tcp 10.217.0.74:8443: connect: connection refused" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.815329 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.826046 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Aug 13 20:05:24 crc kubenswrapper[4183]: I0813 20:05:24.927063 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podStartSLOduration=241.926998625 podStartE2EDuration="4m1.926998625s" podCreationTimestamp="2025-08-13 20:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:23.71475329 +0000 UTC m=+1290.407418348" watchObservedRunningTime="2025-08-13 20:05:24.926998625 +0000 UTC m=+1291.619663633" Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.203459 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="48ddb06f60b4f68d09a2a539638fcf41c8d68761518ac0ef54f91af62a4bb107" Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.207311 4183 kuberuntime_manager.go:1262] container &Container{Name:console,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae,Command:[/opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-config.yaml --service-ca-file=/var/service-ca/service-ca.crt --v=2],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} 
{} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:console-serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-oauth-config,ReadOnly:true,MountPath:/var/oauth-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:console-config,ReadOnly:true,MountPath:/var/console-config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:service-ca,ReadOnly:true,MountPath:/var/service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:oauth-serving-cert,ReadOnly:true,MountPath:/var/oauth-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2nz92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 
25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000590000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod console-644bb77b49-5x5xk_openshift-console(9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1): CreateContainerError: context deadline exceeded Aug 13 20:05:25 crc kubenswrapper[4183]: E0813 20:05:25.207440 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.770618 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.843280 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Aug 13 20:05:25 crc kubenswrapper[4183]: I0813 20:05:25.898295 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Aug 13 20:05:26 crc 
kubenswrapper[4183]: I0813 20:05:26.203430 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:26 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:26 crc kubenswrapper[4183]: > Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.342830 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.352289 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.531826 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="07c341dd7186a1b00e23f13a401a9b19e5d1744c38a4a91d135cf6cc1891fe61" Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.532359 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,Command:[cluster-kube-scheduler-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcfcd442b37827a1acbd2953c1e4f8103f31fec151e6666b9c5bb0045feada8f,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openshift-kube-scheduler-operator-5d9b995f6b-fcgd7_openshift-kube-scheduler-operator(71af81a9-7d43-49b2-9287-c375900aa905): CreateContainerError: context deadline exceeded Aug 13 20:05:26 crc kubenswrapper[4183]: E0813 20:05:26.532539 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.533765 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.583286 4183 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" exitCode=0 Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.583384 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f"} Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.588158 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed" exitCode=0 Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.588850 4183 scope.go:117] "RemoveContainer" containerID="e2ed40c9bc30c8fdbb04088362ef76212a522ea5070f999ce3dc603f8c7a487e" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.589271 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" 
event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed"} Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.655378 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.734553 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.770986 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.829223 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.840965 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.850381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Aug 13 20:05:26 crc kubenswrapper[4183]: I0813 20:05:26.912068 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.416399 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Aug 13 20:05:27 crc kubenswrapper[4183]: E0813 20:05:27.518744 4183 handlers.go:79] "Exec lifecycle hook for Container in Pod failed" err="command 'sleep 25' exited with 137: " execCommand=["sleep","25"] containerName="console" pod="openshift-console/console-5d9678894c-wx62n" message="" Aug 13 20:05:27 crc kubenswrapper[4183]: E0813 20:05:27.519483 4183 
kuberuntime_container.go:653] "PreStop hook failed" err="command 'sleep 25' exited with 137: " pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.519589 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" gracePeriod=33 Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.588263 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.601125 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"d329928035eabc24218bf53782983e5317173e1aceaf58f4d858919ca11603ad"} Aug 13 20:05:27 crc kubenswrapper[4183]: I0813 20:05:27.732427 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.175705 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.615064 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"aef36bd2553b9941561332862e00ec117b296eb1e04d6191f7d1a0e272134312"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.621703 4183 logs.go:325] "Finished 
parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.621932 4183 generic.go:334] "Generic (PLEG): container finished" podID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerID="bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" exitCode=255 Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.622022 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba"} Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628458 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="fe503da15decef9b50942972e3f741dba12102460aee1b1db682f945b69c1239" Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628643 4183 kuberuntime_manager.go:1262] container &Container{Name:cluster-image-registry-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,Command:[],Args:[--files=/var/run/configmaps/trusted-ca/tls-ca-bundle.pem --files=/etc/secrets/tls.crt 
--files=/etc/secrets/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cluster-image-registry-operator,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d97bc5ceeb803fbb8b6f82967607071bcbcf0540932be1b9f59fc5e29e8c646d,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52096a0ef4d0f0ac66bf0d6c0924464d59aee852f6e83195b7c1608de4a289b8,ValueFrom:nil,},EnvVar{Name:IMAGE_PRUNER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:AZURE_ENVIRONMENT_FILEPATH,Value:/tmp/azurestackcloud.json,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:trusted-ca,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca/,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:image-registry-operator-tls,ReadOnly:false,MountPath:/etc/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9x6dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000290000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-image-registry-operator-7769bd8d7d-q5cvv_openshift-image-registry(b54e8941-2fc4-432a-9e51-39684df9089e): CreateContainerError: context deadline exceeded Aug 13 20:05:28 crc kubenswrapper[4183]: E0813 20:05:28.628687 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-image-registry-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.632001 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" 
event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.640740 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerStarted","Data":"a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9"} Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.744903 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.782051 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-console/console-644bb77b49-5x5xk" podStartSLOduration=258.782001936 podStartE2EDuration="4m18.782001936s" podCreationTimestamp="2025-08-13 20:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:28.78074536 +0000 UTC m=+1295.473410118" watchObservedRunningTime="2025-08-13 20:05:28.782001936 +0000 UTC m=+1295.474666664" Aug 13 20:05:28 crc kubenswrapper[4183]: I0813 20:05:28.844642 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.059601 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.060691 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" containerID="cri-o://15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" gracePeriod=5 Aug 13 20:05:29 crc 
kubenswrapper[4183]: I0813 20:05:29.563129 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.647320 4183 scope.go:117] "RemoveContainer" containerID="dd7033f12f10dfa562ecc04746779666b1a34bddfcb245d6e2353cc2c05cc540" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.648997 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.649295 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:29 crc kubenswrapper[4183]: I0813 20:05:29.974239 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.211475 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.211526 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:30 crc kubenswrapper[4183]: E0813 20:05:30.212347 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"openshift-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\", failed to \"StartContainer\" for \"openshift-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-apiserver-check-endpoints pod=apiserver-67cbf64bc9-jjfds_openshift-apiserver(b23d6435-6431-4905-b41b-a517327385e5)\"]" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" Aug 13 
20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.226111 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.269216 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dcqzh" podStartSLOduration=35619822.16397022 podStartE2EDuration="9894h31m16.269154122s" podCreationTimestamp="2024-06-27 13:34:14 +0000 UTC" firstStartedPulling="2025-08-13 19:57:52.841939639 +0000 UTC m=+839.534604367" lastFinishedPulling="2025-08-13 20:05:26.947123582 +0000 UTC m=+1293.639788270" observedRunningTime="2025-08-13 20:05:30.047038901 +0000 UTC m=+1296.739703649" watchObservedRunningTime="2025-08-13 20:05:30.269154122 +0000 UTC m=+1296.961818970" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.469599 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.469728 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.475036 4183 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.475118 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.654393 4183 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:30 crc kubenswrapper[4183]: > Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.657994 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:05:30 crc kubenswrapper[4183]: I0813 20:05:30.658370 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerStarted","Data":"1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b"} Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.188512 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.227737 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.434834 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.543125 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:31 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:31 crc kubenswrapper[4183]: > Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.663391 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.670843 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"8c343d7ff4e8fd8830942fe00e0e9953854c7d57807d54ef2fb25d9d7bd48b55"} Aug 13 20:05:31 crc kubenswrapper[4183]: I0813 20:05:31.713016 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.209841 4183 scope.go:117] "RemoveContainer" containerID="0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.209982 4183 scope.go:117] "RemoveContainer" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.802208 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Aug 13 20:05:32 crc kubenswrapper[4183]: I0813 20:05:32.847086 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.158289 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="d3db60615905e44dc8f118e1544f7eb252e9b396f1af3b926339817c7ce1ed71" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.159038 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) 
--authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2cc5ae1e097b03db862f962be571c386e3ec338e71a053a8dd844a93fb4c31dc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-77658b5b66-dq5sc_openshift-config-operator(530553aa-0a1d-423e-8a22-f5eb4bdbb883): CreateContainerError: context deadline exceeded Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.159218 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.172930 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="2aed5bade7f294b09e25840fe64b91ca7e8460e350e656827bd2648f0721976d" Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.173636 4183 kuberuntime_manager.go:1262] container &Container{Name:kube-controller-manager-operator,Image:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,Command:[cluster-kube-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9d6201c776053346ebce8f90c34797a7a7c05898008e17f3ba9673f5f14507b0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.29.5,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-6f6cb54958-rbddb_openshift-kube-controller-manager-operator(c1620f19-8aa3-45cf-931b-7ae0e5cd14cf): CreateContainerError: context deadline exceeded Aug 13 20:05:33 crc kubenswrapper[4183]: E0813 20:05:33.173894 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.442259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.701413 4183 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.713829 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-f9xdt_3482be94-0cdb-4e2a-889b-e5fac59fdbf5/marketplace-operator/3.log" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.714252 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9"} Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.714456 4183 scope.go:117] "RemoveContainer" containerID="de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.718388 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.720037 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.720403 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.722308 4183 scope.go:117] "RemoveContainer" 
containerID="a82f834c3402db4242f753141733e4ebdbbd2a9132e9ded819a1a24bce37e03b" Aug 13 20:05:33 crc kubenswrapper[4183]: I0813 20:05:33.869762 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.166226 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.212181 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.249053 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.312330 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.445945 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_bf055e84f32193b9c1c21b0c34a61f01/startup-monitor/0.log" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.446088 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526706 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526756 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.526920 4183 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.527030 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620106 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620218 4183 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620246 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620339 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.620378 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") pod \"bf055e84f32193b9c1c21b0c34a61f01\" (UID: \"bf055e84f32193b9c1c21b0c34a61f01\") " Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623328 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log" (OuterVolumeSpecName: "var-log") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623312 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests" (OuterVolumeSpecName: "manifests") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623479 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.623721 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.658206 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "bf055e84f32193b9c1c21b0c34a61f01" (UID: "bf055e84f32193b9c1c21b0c34a61f01"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.702693 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.703227 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722528 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722593 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722607 4183 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-manifests\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722622 4183 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.722636 4183 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bf055e84f32193b9c1c21b0c34a61f01-var-log\") on node \"crc\" DevicePath \"\"" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.742655 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/1.log" Aug 
13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.743210 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"6e2b2ebcbabf5c1d8517ce153f68731713702ba7ac48dbbb35aa2337043be534"} Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.749146 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760219 4183 generic.go:334] "Generic (PLEG): container finished" podID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" containerID="de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e" exitCode=255 Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerDied","Data":"de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e"} Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.760945 4183 scope.go:117] "RemoveContainer" containerID="de6ce3128562801aa3c24e80d49667d2d193ade88fcdf509085e61d3d048041e" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.780158 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"95ea01f530cb8f9c84220be232e511a271a9480b103ab0095af603077e0cb252"} Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.781288 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.787186 4183 
logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_bf055e84f32193b9c1c21b0c34a61f01/startup-monitor/0.log" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.787250 4183 generic.go:334] "Generic (PLEG): container finished" podID="bf055e84f32193b9c1c21b0c34a61f01" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" exitCode=137 Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788154 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788564 4183 scope.go:117] "RemoveContainer" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.788989 4183 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.789131 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.951503 4183 scope.go:117] "RemoveContainer" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" Aug 13 20:05:34 crc kubenswrapper[4183]: E0813 20:05:34.952199 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": container with ID starting with 15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268 not found: ID does not exist" containerID="15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268" Aug 13 20:05:34 crc kubenswrapper[4183]: I0813 20:05:34.952261 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268"} err="failed to get container status \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": rpc error: code = NotFound desc = could not find container \"15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268\": container with ID starting with 15820ab514a1ec9c31d0791a36dbd2a502fe86541e3878da038ece782fc81268 not found: ID does not exist" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.225693 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf055e84f32193b9c1c21b0c34a61f01" path="/var/lib/kubelet/pods/bf055e84f32193b9c1c21b0c34a61f01/volumes" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.229141 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.232216 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.311740 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.321850 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.321937 4183 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" 
mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="423c3b23-c4c1-4055-868d-65e7387f40ce" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.341507 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.341580 4183 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="423c3b23-c4c1-4055-868d-65e7387f40ce" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.386306 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Aug 13 20:05:35 crc kubenswrapper[4183]: I0813 20:05:35.800662 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"a91ec548a60f506a0a73fce12c0a6b3a787ccba29077a1f7d43da8a738f473d2"} Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.031690 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:36 crc kubenswrapper[4183]: > Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.140880 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.301833 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" 
output=< Aug 13 20:05:36 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:36 crc kubenswrapper[4183]: > Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.511890 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.812216 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/0.log" Aug 13 20:05:36 crc kubenswrapper[4183]: I0813 20:05:36.812973 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"4dd7298bc15ad94ac15b2586221cba0590f58e6667404ba80b077dc597db4950"} Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200104 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = kubelet may be retrying requests that are timing out in CRI-O due to system load. 
Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved" podSandboxID="489c96bd95d523f4b7e59e72e928433dfb6870d719899f788f393fc315d5c1f5" Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200320 4183 kuberuntime_manager.go:1262] container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:527c3ad8df5e881e720ffd8d0f498c3fbb7727c280c51655d6c83c747373c611,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4f852821a513f8bab2eae4047b6c603e36a7cd202001638900ca14fab436403,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fd628f40d321354832b0f409d2bf9b89910de27bc6263a4fb5a55c25e160a99,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 
50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l8bxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator(0f394926-bdb9-425c-b36e-264d7fd34550): CreateContainerError: kubelet may be retrying requests that are timing out in CRI-O due to system load. 
Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved Aug 13 20:05:37 crc kubenswrapper[4183]: E0813 20:05:37.200385 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CreateContainerError: \"kubelet may be retrying requests that are timing out in CRI-O due to system load. Currently at stage container storage creation: the requested container k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 is now ready and will be provided to the kubelet on next retry: error reserving ctr name k8s_openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_openshift-controller-manager-operator_0f394926-bdb9-425c-b36e-264d7fd34550_1 for id 5311a227522754649347ee221cf50be9f546f8a870582594bc726558a6fab7f5: name is reserved\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.344231 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.464262 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.819730 4183 scope.go:117] "RemoveContainer" containerID="30bf5390313371a8f7b0bd5cd736b789b0d1779681e69eff1d8e1c6c5c72d56d" Aug 13 20:05:37 crc kubenswrapper[4183]: I0813 20:05:37.905756 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.438414 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.835543 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7978d7d7f6-2nt8z_0f394926-bdb9-425c-b36e-264d7fd34550/openshift-controller-manager-operator/0.log" Aug 13 20:05:38 crc kubenswrapper[4183]: I0813 20:05:38.836025 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"18768e4e615786eedd49b25431da2fe5b5aaf29e37914eddd9e94881eac5e8c1"} Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.019126 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.153324 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.188592 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.261904 4183 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"audit-1" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.538769 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.538986 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.550611 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.671238 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Aug 13 20:05:39 crc kubenswrapper[4183]: I0813 20:05:39.854671 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.093265 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.161234 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:40 crc kubenswrapper[4183]: > Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.347047 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.397675 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.468081 4183 reflector.go:351] 
Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.475820 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.483262 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-644bb77b49-5x5xk" Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.708985 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:05:40 crc kubenswrapper[4183]: I0813 20:05:40.830628 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:40 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:40 crc kubenswrapper[4183]: > Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.179381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226057 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="8d494f516ab462fe0efca4e10a5bd10552cb52fe8198ca66dbb92b9402c1eae4" Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226360 4183 kuberuntime_manager.go:1262] container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,Command:[/bin/bash -c #!/bin/bash Aug 13 20:05:41 crc kubenswrapper[4183]: set -o allexport Aug 13 20:05:41 crc kubenswrapper[4183]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Aug 13 20:05:41 crc 
kubenswrapper[4183]: source /etc/kubernetes/apiserver-url.env Aug 13 20:05:41 crc kubenswrapper[4183]: else Aug 13 20:05:41 crc kubenswrapper[4183]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Aug 13 20:05:41 crc kubenswrapper[4183]: exit 1 Aug 13 20:05:41 crc kubenswrapper[4183]: fi Aug 13 20:05:41 crc kubenswrapper[4183]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Aug 13 20:05:41 crc kubenswrapper[4183]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.16.0,ValueFrom:nil,},EnvVar{Name:SDN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ec002699d6fa111b93b08bda974586ae4018f4a52d1cbfd0995e6dc9c732151,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce3a9355a4497b51899867170943d34bbc2d2b7996d9a002c103797bd828d71b,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b589a20426aa14440a5e226ccd7f08c3efb23f45a2d687d71c9b399967adfa45,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b752ed92e0354fc8b4316fc7fc482d136a19212ae4a886515db81e8f0b37648,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:329299206a95c4bc22e9175de3c3dbedc8e44048aaa7d07e83eafb3e14a3a30f,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fbcb795269ddc7387faae477e57569282f87193d8f9c6130efffc8
c7480dd73,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f410c53f8634b7827203f15862c05b6872e3d2e7ec59799b27a6b414943469d8,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0791454224e2ec76fd43916220bd5ae55bf18f37f0cd571cb05c76e1d791453,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:476d419f3e57548b58f62712e3994b6e6d4a6ca45c5a462f71b7b8e5f137a208,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc389b05ef555b742646390ef180ad25a8f5111c68fec6df1cfa1c6c492e98da,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2cb35b80655d65c4b64e2298483814e2abac94eef5497089ee1e03234f4fc,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc5f4b6565d37bd875cdb42e95372128231218fb8741f640b09565d9dcea2cb1,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6994ed1b1593f7638e3a8732c503356885a02dc245451ceddc3809f61023dce,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4sfhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-767c585db5-zd56b_openshift-network-operator(cc291782-27d2-4a74-af79-c7dcb31535d2): CreateContainerError: context deadline exceeded Aug 13 20:05:41 crc kubenswrapper[4183]: E0813 20:05:41.226433 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-network-operator/network-operator-767c585db5-zd56b" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" Aug 13 20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.666475 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Aug 13 
20:05:41 crc kubenswrapper[4183]: I0813 20:05:41.869956 4183 scope.go:117] "RemoveContainer" containerID="ed0bd4acf60db8ba97d418227b69f1642a60426ea16a5be0171dbc8fe3780dce" Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.828248 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.878397 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Aug 13 20:05:42 crc kubenswrapper[4183]: I0813 20:05:42.880586 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"c97fff743291294c8c2671715b19a9576ef9f434134cc0f02b695dbc32284d86"} Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.209312 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.209366 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.884551 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.897724 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.900136 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:43 crc kubenswrapper[4183]: I0813 20:05:43.902595 4183 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a"} Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.278440 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.316338 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.541374 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.817110 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.916519 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.918705 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:44 crc kubenswrapper[4183]: I0813 20:05:44.920139 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerStarted","Data":"b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289"} Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.013856 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-8jhz6" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.089826 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podStartSLOduration=304.089658085 podStartE2EDuration="5m4.089658085s" podCreationTimestamp="2025-08-13 20:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:05:45.042200056 +0000 UTC m=+1311.734864874" watchObservedRunningTime="2025-08-13 20:05:45.089658085 +0000 UTC m=+1311.782322903" Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.250964 4183 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="282af480c29eba88e80ad94d58f4ba7eb51ae6c6558514585728acae3448d722" Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.251273 4183 kuberuntime_manager.go:1262] container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.16.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d9vhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-546b4f8984-pwccz_openshift-service-ca-operator(6d67253e-2acd-4bc1-8185-793587da4f17): CreateContainerError: context deadline exceeded Aug 13 20:05:45 crc kubenswrapper[4183]: E0813 20:05:45.251332 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CreateContainerError: \"context deadline exceeded\"" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.327881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.665239 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" 
Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.665483 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.901482 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:45 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:45 crc kubenswrapper[4183]: > Aug 13 20:05:45 crc kubenswrapper[4183]: I0813 20:05:45.927429 4183 scope.go:117] "RemoveContainer" containerID="de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc" Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.596218 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]log ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:05:46 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 20:05:46 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:05:46 crc kubenswrapper[4183]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:05:46 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:05:46 crc kubenswrapper[4183]: healthz check failed Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.596345 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:05:46 crc kubenswrapper[4183]: I0813 20:05:46.938478 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"7bc73c64b9d7e197b77d0f43ab147a148818682c82020be549d82802a07420f4"} Aug 13 20:05:48 crc kubenswrapper[4183]: I0813 20:05:48.956385 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:49 crc kubenswrapper[4183]: I0813 20:05:49.169157 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k9qqb" Aug 13 20:05:49 crc kubenswrapper[4183]: I0813 20:05:49.521961 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.699518 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.716124 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:05:50 crc kubenswrapper[4183]: I0813 20:05:50.778479 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:50 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:50 crc kubenswrapper[4183]: > Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.716496 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718307 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718444 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718554 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.718680 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:05:54 crc kubenswrapper[4183]: I0813 20:05:54.748040 4183 scope.go:117] "RemoveContainer" containerID="47fe4a48f20f31be64ae9b101ef8f82942a11a5dc253da7cd8d82bea357cc9c7" Aug 13 20:05:55 crc kubenswrapper[4183]: I0813 20:05:55.816884 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="registry-server" probeResult="failure" output=< Aug 13 20:05:55 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:05:55 crc kubenswrapper[4183]: > Aug 13 
20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.068190 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"] Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.070513 4183 topology_manager.go:215] "Topology Admit Handler" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" podNamespace="openshift-kube-controller-manager" podName="installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: E0813 20:05:57.072133 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.072184 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.072369 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf055e84f32193b9c1c21b0c34a61f01" containerName="startup-monitor" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.073129 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.078051 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.080371 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.117579 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"] Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165299 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165405 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.165432 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.266818 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267099 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267202 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267699 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.267745 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.298670 4183 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"installer-10-retry-1-crc\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") " pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.402598 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.861827 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"] Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.862628 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" containerID="cri-o://b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" gracePeriod=90 Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.862709 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" gracePeriod=90 Aug 13 20:05:57 crc kubenswrapper[4183]: I0813 20:05:57.989886 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-retry-1-crc"] Aug 13 20:05:58 crc kubenswrapper[4183]: I0813 20:05:58.042959 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerStarted","Data":"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec"} Aug 13 
20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.055571 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver-check-endpoints/4.log" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.056695 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058388 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" exitCode=0 Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058470 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289"} Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.058521 4183 scope.go:117] "RemoveContainer" containerID="e5878255f5e541fa4d169576071de072a25742be132fcad416fbf91f5f8ebad9" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.795340 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:05:59 crc kubenswrapper[4183]: I0813 20:05:59.911750 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dcqzh" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.071854 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerStarted","Data":"6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619"} Aug 13 20:06:00 crc kubenswrapper[4183]: 
I0813 20:06:00.076769 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676057 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]log ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:06:00 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:06:00 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld Aug 13 20:06:00 crc kubenswrapper[4183]: readyz check failed Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676494 
4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.676601 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:06:00 crc kubenswrapper[4183]: I0813 20:06:00.711960 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" podStartSLOduration=3.711887332 podStartE2EDuration="3.711887332s" podCreationTimestamp="2025-08-13 20:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:06:00.10385754 +0000 UTC m=+1326.796522368" watchObservedRunningTime="2025-08-13 20:06:00.711887332 +0000 UTC m=+1327.404552310" Aug 13 20:06:04 crc kubenswrapper[4183]: I0813 20:06:04.845332 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:06:04 crc kubenswrapper[4183]: I0813 20:06:04.971234 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4jkp" Aug 13 20:06:05 crc kubenswrapper[4183]: I0813 20:06:05.676342 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]log ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]informer-sync ok Aug 13 20:06:05 crc kubenswrapper[4183]: 
[+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:06:05 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:06:05 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld Aug 13 20:06:05 crc kubenswrapper[4183]: readyz check failed Aug 13 20:06:05 crc kubenswrapper[4183]: I0813 20:06:05.676435 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.907656 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:06 crc kubenswrapper[4183]: I0813 20:06:06.913074 4183 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:06 crc 
kubenswrapper[4183]: I0813 20:06:06.994135 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-console/console-5d9678894c-wx62n" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" containerID="cri-o://1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" gracePeriod=15 Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.146170 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147353 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/0.log" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147427 4183 generic.go:334] "Generic (PLEG): container finished" podID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerID="1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" exitCode=2 Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b"} Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.147512 4183 scope.go:117] "RemoveContainer" containerID="bc9bc2d351deda360fe2c9a8ea52b6167467896e22b28bcf9fdb33f8155b79ba" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.475603 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.475695 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.528768 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529095 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529400 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.529551 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.530391 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.530572 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.531014 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") pod \"384ed0e8-86e4-42df-bd2c-604c1f536a15\" (UID: \"384ed0e8-86e4-42df-bd2c-604c1f536a15\") " Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548624 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config" (OuterVolumeSpecName: "console-config") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548824 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.548848 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.549462 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca" (OuterVolumeSpecName: "service-ca") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.554526 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.555144 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.555501 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b" (OuterVolumeSpecName: "kube-api-access-hjq9b") pod "384ed0e8-86e4-42df-bd2c-604c1f536a15" (UID: "384ed0e8-86e4-42df-bd2c-604c1f536a15"). InnerVolumeSpecName "kube-api-access-hjq9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633186 4183 reconciler_common.go:300] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-oauth-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633267 4183 reconciler_common.go:300] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-service-ca\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633282 4183 reconciler_common.go:300] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-config\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633293 4183 reconciler_common.go:300] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/384ed0e8-86e4-42df-bd2c-604c1f536a15-console-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633306 4183 reconciler_common.go:300] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633316 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hjq9b\" (UniqueName: \"kubernetes.io/projected/384ed0e8-86e4-42df-bd2c-604c1f536a15-kube-api-access-hjq9b\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:07 crc kubenswrapper[4183]: I0813 20:06:07.633327 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/384ed0e8-86e4-42df-bd2c-604c1f536a15-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:08 crc 
kubenswrapper[4183]: I0813 20:06:08.155627 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9678894c-wx62n_384ed0e8-86e4-42df-bd2c-604c1f536a15/console/1.log" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.155961 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9678894c-wx62n" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.155971 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9678894c-wx62n" event={"ID":"384ed0e8-86e4-42df-bd2c-604c1f536a15","Type":"ContainerDied","Data":"612e7824c92f4db329dd14ca96f855eb9f361591c35855b089640224677bf2f7"} Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.156053 4183 scope.go:117] "RemoveContainer" containerID="1ce82b64b98820f650cc613d542e0f0643d32ba3d661ee198711362ba0c99f8b" Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.264684 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:06:08 crc kubenswrapper[4183]: I0813 20:06:08.270602 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d9678894c-wx62n"] Aug 13 20:06:09 crc kubenswrapper[4183]: I0813 20:06:09.219349 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" path="/var/lib/kubelet/pods/384ed0e8-86e4-42df-bd2c-604c1f536a15/volumes" Aug 13 20:06:10 crc kubenswrapper[4183]: I0813 20:06:10.675650 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]log ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]etcd-readiness ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]informer-sync 
ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:06:10 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:06:10 crc kubenswrapper[4183]: [-]shutdown failed: reason withheld Aug 13 20:06:10 crc kubenswrapper[4183]: readyz check failed Aug 13 20:06:10 crc kubenswrapper[4183]: I0813 20:06:10.676308 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:06:14 crc kubenswrapper[4183]: I0813 20:06:14.718261 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:06:15 crc kubenswrapper[4183]: I0813 20:06:15.666176 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver 
namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:15 crc kubenswrapper[4183]: I0813 20:06:15.666751 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:20 crc kubenswrapper[4183]: I0813 20:06:20.666389 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:20 crc kubenswrapper[4183]: I0813 20:06:20.666979 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:25 crc kubenswrapper[4183]: I0813 20:06:25.666823 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:25 crc kubenswrapper[4183]: I0813 20:06:25.667491 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: 
connection refused" Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.666322 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.667066 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.704832 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:30 crc kubenswrapper[4183]: I0813 20:06:30.705725 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rmwfn" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" containerID="cri-o://2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" gracePeriod=2 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.291244 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336637 4183 generic.go:334] "Generic (PLEG): container finished" podID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" exitCode=0 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336726 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336770 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmwfn" event={"ID":"9ad279b4-d9dc-42a8-a1c8-a002bd063482","Type":"ContainerDied","Data":"9218677c9aa0f218ae58b4990048c486cef74452f639e5a303ac08e79a2c31d7"} Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336890 4183 scope.go:117] "RemoveContainer" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.336854 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmwfn" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.399059 4183 scope.go:117] "RemoveContainer" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.400918 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.401034 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.401135 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") pod \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\" (UID: \"9ad279b4-d9dc-42a8-a1c8-a002bd063482\") " Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.407107 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities" (OuterVolumeSpecName: "utilities") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.418403 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.418835 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dcqzh" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server" containerID="cri-o://a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" gracePeriod=2 Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.460514 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp" (OuterVolumeSpecName: "kube-api-access-r7dbp") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "kube-api-access-r7dbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.506106 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.506186 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r7dbp\" (UniqueName: \"kubernetes.io/projected/9ad279b4-d9dc-42a8-a1c8-a002bd063482-kube-api-access-r7dbp\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.676153 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ad279b4-d9dc-42a8-a1c8-a002bd063482" (UID: "9ad279b4-d9dc-42a8-a1c8-a002bd063482"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.710297 4183 scope.go:117] "RemoveContainer" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.713096 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ad279b4-d9dc-42a8-a1c8-a002bd063482-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.865597 4183 scope.go:117] "RemoveContainer" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.866587 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": container with ID starting with 2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467 not found: ID does not exist" containerID="2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.866673 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467"} err="failed to get container status \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": rpc error: code = NotFound desc = could not find container \"2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467\": container with ID starting with 2b69a4a950514ff8d569afb43701fa230045e0687c1859975dc65fed5c5d7467 not found: ID does not exist" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.866689 4183 scope.go:117] "RemoveContainer" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.867610 
4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": container with ID starting with 5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a not found: ID does not exist" containerID="5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.867833 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a"} err="failed to get container status \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": rpc error: code = NotFound desc = could not find container \"5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a\": container with ID starting with 5dbac91dc644a8b25317c807e75f64e96be88bcfa9dc60fb2f4e72c80656206a not found: ID does not exist" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.867857 4183 scope.go:117] "RemoveContainer" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: E0813 20:06:31.868437 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": container with ID starting with 1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3 not found: ID does not exist" containerID="1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3" Aug 13 20:06:31 crc kubenswrapper[4183]: I0813 20:06:31.868469 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3"} err="failed to get container status \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": 
rpc error: code = NotFound desc = could not find container \"1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3\": container with ID starting with 1d3ccfcb0f390dfe83d5c073cc5942fd65993c97adb90156294ad82281a940f3 not found: ID does not exist" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.022861 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.079232 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmwfn"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.143688 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144333 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" containerID="cri-o://2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144370 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" containerID="cri-o://2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.144341 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" gracePeriod=30 Aug 13 20:06:32 crc 
kubenswrapper[4183]: I0813 20:06:32.144696 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" gracePeriod=30 Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.149628 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150313 4183 topology_manager.go:215] "Topology Admit Handler" podUID="56d9256d8ee968b89d58cda59af60969" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150575 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150679 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150738 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150753 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150766 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150828 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" 
containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150845 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150855 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150900 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150915 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150928 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150938 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150965 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150975 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.150986 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.150998 4183 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151010 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-content" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151022 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-content" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151035 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151044 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151059 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151069 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller" Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151081 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-utilities" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151090 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="extract-utilities" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151384 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager" Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 
20:06:32.151408 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151419 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151430 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" containerName="registry-server"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151446 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-recovery-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151459 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151472 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151486 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151499 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151523 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151534 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151549 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager-cert-syncer"
Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151685 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151697 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="384ed0e8-86e4-42df-bd2c-604c1f536a15" containerName="console"
Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151714 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151723 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.151744 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.151755 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.154246 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="kube-controller-manager"
Aug 13 20:06:32 crc kubenswrapper[4183]: E0813 20:06:32.154457 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.154473 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerName="cluster-policy-controller"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.220156 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.220710 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324255 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324653 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.324758 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.325074 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.377766 4183 generic.go:334] "Generic (PLEG): container finished" podID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerID="a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9" exitCode=0
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.380354 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9"}
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.513021 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.565031 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.567986 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager-cert-syncer/0.log"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.585559 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.587046 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.587198 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.610520 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"]
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.613113 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k9qqb" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server" containerID="cri-o://81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" gracePeriod=2
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628478 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") pod \"2eb2b200bca0d10cf0fe16fb7c0caf80\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") "
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628580 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") "
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628636 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") pod \"2eb2b200bca0d10cf0fe16fb7c0caf80\" (UID: \"2eb2b200bca0d10cf0fe16fb7c0caf80\") "
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628668 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") "
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.628712 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") pod \"6db26b71-4e04-4688-a0c0-00e06e8c888d\" (UID: \"6db26b71-4e04-4688-a0c0-00e06e8c888d\") "
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.630710 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2eb2b200bca0d10cf0fe16fb7c0caf80" (UID: "2eb2b200bca0d10cf0fe16fb7c0caf80"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.631118 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2eb2b200bca0d10cf0fe16fb7c0caf80" (UID: "2eb2b200bca0d10cf0fe16fb7c0caf80"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.632228 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities" (OuterVolumeSpecName: "utilities") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.646752 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s" (OuterVolumeSpecName: "kube-api-access-nzb4s") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "kube-api-access-nzb4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746159 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-cert-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746221 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2eb2b200bca0d10cf0fe16fb7c0caf80-resource-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746236 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.746252 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nzb4s\" (UniqueName: \"kubernetes.io/projected/6db26b71-4e04-4688-a0c0-00e06e8c888d-kube-api-access-nzb4s\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.769860 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"]
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.770273 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4v97" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server" containerID="cri-o://844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" gracePeriod=2
Aug 13 20:06:32 crc kubenswrapper[4183]: I0813 20:06:32.808083 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.223896 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb2b200bca0d10cf0fe16fb7c0caf80" path="/var/lib/kubelet/pods/2eb2b200bca0d10cf0fe16fb7c0caf80/volumes"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.231017 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.237370 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad279b4-d9dc-42a8-a1c8-a002bd063482" path="/var/lib/kubelet/pods/9ad279b4-d9dc-42a8-a1c8-a002bd063482/volumes"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.386715 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") "
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.386913 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") "
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.387039 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") pod \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\" (UID: \"ccdf38cf-634a-41a2-9c8b-74bb86af80a7\") "
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.389317 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities" (OuterVolumeSpecName: "utilities") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.406403 4183 generic.go:334] "Generic (PLEG): container finished" podID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerID="6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619" exitCode=0
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.407500 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerDied","Data":"6cc839079ff04a5b6cb4524dc6e36a89fd8caab9bf6a552eeffb557088851619"}
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.414144 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dcqzh"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.414560 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs" (OuterVolumeSpecName: "kube-api-access-n59fs") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "kube-api-access-n59fs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.415194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dcqzh" event={"ID":"6db26b71-4e04-4688-a0c0-00e06e8c888d","Type":"ContainerDied","Data":"fd8d1d12d982e02597a295d2f3337ac4df705e6c16a1c44fe5fb982976562a45"}
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.415606 4183 scope.go:117] "RemoveContainer" containerID="a39a002d95a82ae963b46c8196dfed935c199e471be64946be7406b3b02562c9"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.447434 4183 generic.go:334] "Generic (PLEG): container finished" podID="bb917686-edfb-4158-86ad-6fce0abec64c" containerID="844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" exitCode=0
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.448262 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568"}
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482407 4183 generic.go:334] "Generic (PLEG): container finished" podID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2" exitCode=0
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482857 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k9qqb"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.482914 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"}
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.483860 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k9qqb" event={"ID":"ccdf38cf-634a-41a2-9c8b-74bb86af80a7","Type":"ContainerDied","Data":"ac543dfbb4577c159abff74fe63750ec6557d4198d6572a7497b3fc598fd6350"}
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.489756 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.490010 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n59fs\" (UniqueName: \"kubernetes.io/projected/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-kube-api-access-n59fs\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.501195 4183 scope.go:117] "RemoveContainer" containerID="5dfab3908e38ec4c78ee676439e402432e22c1d28963eb816627f094e1f7ffed"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.509593 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/cluster-policy-controller/5.log"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.538016 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager-cert-syncer/0.log"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548408 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_2eb2b200bca0d10cf0fe16fb7c0caf80/kube-controller-manager/0.log"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548477 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" exitCode=0
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548491 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" exitCode=0
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548506 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" exitCode=0
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.548557 4183 generic.go:334] "Generic (PLEG): container finished" podID="2eb2b200bca0d10cf0fe16fb7c0caf80" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" exitCode=2
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.550728 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.605947 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.611004 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="2eb2b200bca0d10cf0fe16fb7c0caf80" podUID="56d9256d8ee968b89d58cda59af60969"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.651167 4183 scope.go:117] "RemoveContainer" containerID="d14340d88bbcb0bdafcdb676bdd527fc02a2314081fa0355609f2faf4fe6c57a"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699327 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") "
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699537 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") "
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.699654 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") pod \"bb917686-edfb-4158-86ad-6fce0abec64c\" (UID: \"bb917686-edfb-4158-86ad-6fce0abec64c\") "
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.703280 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities" (OuterVolumeSpecName: "utilities") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.713128 4183 scope.go:117] "RemoveContainer" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.715474 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr" (OuterVolumeSpecName: "kube-api-access-mwzcr") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "kube-api-access-mwzcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.766120 4183 scope.go:117] "RemoveContainer" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.809106 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mwzcr\" (UniqueName: \"kubernetes.io/projected/bb917686-edfb-4158-86ad-6fce0abec64c-kube-api-access-mwzcr\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.809204 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.876493 4183 scope.go:117] "RemoveContainer" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.951741 4183 scope.go:117] "RemoveContainer" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"
Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.956229 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": container with ID starting with 81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2 not found: ID does not exist" containerID="81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.956396 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2"} err="failed to get container status \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": rpc error: code = NotFound desc = could not find container \"81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2\": container with ID starting with 81cb681bd6d9448d71ccc777c84e85ec17d8973bb87b22b910458292232175d2 not found: ID does not exist"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.956556 4183 scope.go:117] "RemoveContainer" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"
Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.957238 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": container with ID starting with be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24 not found: ID does not exist" containerID="be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957296 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24"} err="failed to get container status \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": rpc error: code = NotFound desc = could not find container \"be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24\": container with ID starting with be5d91aad199c1c8bd5b2b79223d42aced870eea5f8ee3c624591deb82d9bd24 not found: ID does not exist"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957317 4183 scope.go:117] "RemoveContainer" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"
Aug 13 20:06:33 crc kubenswrapper[4183]: E0813 20:06:33.957667 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": container with ID starting with aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101 not found: ID does not exist" containerID="aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957698 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101"} err="failed to get container status \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": rpc error: code = NotFound desc = could not find container \"aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101\": container with ID starting with aeb0e68fe787546cea2b489f1fad4768a18174f8e337cc1ad4994c7300f24101 not found: ID does not exist"
Aug 13 20:06:33 crc kubenswrapper[4183]: I0813 20:06:33.957715 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.028438 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.113426 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.115441 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6db26b71-4e04-4688-a0c0-00e06e8c888d" (UID: "6db26b71-4e04-4688-a0c0-00e06e8c888d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.124953 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db26b71-4e04-4688-a0c0-00e06e8c888d-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.127435 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb917686-edfb-4158-86ad-6fce0abec64c" (UID: "bb917686-edfb-4158-86ad-6fce0abec64c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.190249 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.226137 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb917686-edfb-4158-86ad-6fce0abec64c-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.230289 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.266904 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"]
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.267957 4183 topology_manager.go:215] "Topology Admit Handler" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" podNamespace="openshift-marketplace" podName="redhat-marketplace-4txfd"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.268649 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-utilities"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269046 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-utilities"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269069 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269076 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269091 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269100 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269114 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-utilities"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269122 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-utilities"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269136 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-content"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269143 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="extract-content"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269155 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269164 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269178 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-content"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269186 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="extract-content"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269219 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-content"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269227 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-content"
Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.269237 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-utilities"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269244 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="extract-utilities"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269398 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269419 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.269428 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" containerName="registry-server"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.271124 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.302167 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.332213 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"]
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448725 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448842 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.448906 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd"
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.481760 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dcqzh"]
Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.515334 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api"
pods=["openshift-marketplace/redhat-operators-dcqzh"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551308 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551391 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.551418 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.552235 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.553105 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " 
pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.610158 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.625273 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"redhat-marketplace-4txfd\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.626101 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.626376 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.626490 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.631271 4183 remote_runtime.go:432] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631345 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631366 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.631658 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.641227 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.641315 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.641344 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.642564 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642589 4183 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642599 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642761 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4v97" event={"ID":"bb917686-edfb-4158-86ad-6fce0abec64c","Type":"ContainerDied","Data":"2c30e71c46910d59824a916398858a98e2a14b68aeaa558e0e34e08a82403761"} Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.642974 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4v97" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.645946 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.646259 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.646347 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: E0813 20:06:34.650081 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.650302 4183 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.650482 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.652664 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.653002 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.668983 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not 
exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.669054 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.676139 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.676184 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.689053 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.689169 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.690944 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status 
\"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.691014 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.694191 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.694252 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.695225 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.695266 4183 scope.go:117] "RemoveContainer" 
containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.705911 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.705945 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.706983 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707016 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707643 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could 
not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.707677 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.713412 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.713475 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.716474 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.716517 4183 scope.go:117] "RemoveContainer" containerID="2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 
20:06:34.722234 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa"} err="failed to get container status \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": rpc error: code = NotFound desc = could not find container \"2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa\": container with ID starting with 2ff0ead9b839059a48cf26307a1e6357616626b76bccf46dce59cc73bb4f3faa not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.722283 4183 scope.go:117] "RemoveContainer" containerID="2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.733247 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a"} err="failed to get container status \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": rpc error: code = NotFound desc = could not find container \"2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a\": container with ID starting with 2ae58fccad4b322784e67af3bd8a0a758aef0d9c6a5be815a7411c4b418a3b2a not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.733349 4183 scope.go:117] "RemoveContainer" containerID="d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.739469 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdf38cf-634a-41a2-9c8b-74bb86af80a7" (UID: "ccdf38cf-634a-41a2-9c8b-74bb86af80a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.741499 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc"} err="failed to get container status \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": rpc error: code = NotFound desc = could not find container \"d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc\": container with ID starting with d0b26dc9c88fe1590f9b795364005249431e6a3ef9c3a5b871ef81c58e943ffc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.741566 4183 scope.go:117] "RemoveContainer" containerID="8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.742463 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc"} err="failed to get container status \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": rpc error: code = NotFound desc = could not find container \"8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc\": container with ID starting with 8df10aba26f4dfda5e7e2437d5ea089e03045004bbaee6abae1caf0029224edc not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.742497 4183 scope.go:117] "RemoveContainer" containerID="ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.745275 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93"} err="failed to get container status \"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": rpc error: code = NotFound desc = could not find container 
\"ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93\": container with ID starting with ed615504567ee8b0c6f5c11ee41aae373a4b5f7eb5d5bce46126d71588fdda93 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.745312 4183 scope.go:117] "RemoveContainer" containerID="28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.746895 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509"} err="failed to get container status \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": rpc error: code = NotFound desc = could not find container \"28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509\": container with ID starting with 28d066ff3fa55107fe65e1c05111431b6dfc8ff5884c7b80631d28140caa1509 not found: ID does not exist" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.746915 4183 scope.go:117] "RemoveContainer" containerID="844f180a492dff97326b5ea50f79dcbfc132e7edaccd1723d8997c38fb3bf568" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.767764 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.767926 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdf38cf-634a-41a2-9c8b-74bb86af80a7-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:06:34 crc kubenswrapper[4183]: I0813 20:06:34.817313 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4v97"] Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.004094 4183 scope.go:117] "RemoveContainer" containerID="c3dbff7f4c3117da13658584d3a507d50302df8be0d31802f8e4e5b93ddec694" Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 
20:06:35.109002 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"]
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.135918 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k9qqb"]
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.195435 4183 scope.go:117] "RemoveContainer" containerID="1e5547d2ec134d919f281661be1d8428aa473dba5709d51d784bbe4bf125231a"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.225423 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db26b71-4e04-4688-a0c0-00e06e8c888d" path="/var/lib/kubelet/pods/6db26b71-4e04-4688-a0c0-00e06e8c888d/volumes"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.228259 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb917686-edfb-4158-86ad-6fce0abec64c" path="/var/lib/kubelet/pods/bb917686-edfb-4158-86ad-6fce0abec64c/volumes"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.229735 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdf38cf-634a-41a2-9c8b-74bb86af80a7" path="/var/lib/kubelet/pods/ccdf38cf-634a-41a2-9c8b-74bb86af80a7/volumes"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.622105 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.666846 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.667018 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705030 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" event={"ID":"dc02677d-deed-4cc9-bb8c-0dd300f83655","Type":"ContainerDied","Data":"0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec"}
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705097 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d375f365a8fdeb2a6f8e132a388c08618e43492f2ffe32f450d914395120bec"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.705171 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.714641 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") "
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.714768 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") "
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.715053 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") pod \"dc02677d-deed-4cc9-bb8c-0dd300f83655\" (UID: \"dc02677d-deed-4cc9-bb8c-0dd300f83655\") "
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.716059 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock" (OuterVolumeSpecName: "var-lock") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.716115 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.739478 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"]
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.739660 4183 topology_manager.go:215] "Topology Admit Handler" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" podNamespace="openshift-marketplace" podName="certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.740078 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dc02677d-deed-4cc9-bb8c-0dd300f83655" (UID: "dc02677d-deed-4cc9-bb8c-0dd300f83655"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:06:35 crc kubenswrapper[4183]: E0813 20:06:35.752916 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.752975 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.753232 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" containerName="installer"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.754313 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.802645 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"]
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.816953 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817278 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817663 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817921 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817940 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc02677d-deed-4cc9-bb8c-0dd300f83655-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.817955 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc02677d-deed-4cc9-bb8c-0dd300f83655-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.919704 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.920273 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.920436 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.921238 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.921268 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.926700 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"]
Aug 13 20:06:35 crc kubenswrapper[4183]: I0813 20:06:35.967949 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"certified-operators-cfdk8\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") " pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.090066 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.638373 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"]
Aug 13 20:06:36 crc kubenswrapper[4183]: W0813 20:06:36.663759 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5391dc5d_0f00_4464_b617_b164e2f9b77a.slice/crio-93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d WatchSource:0}: Error finding container 93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d: Status 404 returned error can't find the container with id 93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.722331 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"]
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.722712 4183 topology_manager.go:215] "Topology Admit Handler" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" podNamespace="openshift-marketplace" podName="redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.724295 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733585 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733685 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.733727 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740443 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438" exitCode=0
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740556 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438"}
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.740590 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d"}
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.744770 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d"}
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.834905 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.836955 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.837483 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.836767 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.837421 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.890610 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"redhat-operators-pmqwc\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:36 crc kubenswrapper[4183]: I0813 20:06:36.896240 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"]
Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.151050 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.657129 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"]
Aug 13 20:06:37 crc kubenswrapper[4183]: W0813 20:06:37.678370 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e1b407b_80a9_40d6_aa0b_a5ffb555c8ed.slice/crio-3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8 WatchSource:0}: Error finding container 3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8: Status 404 returned error can't find the container with id 3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8
Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.752983 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8"}
Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.755721 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215" exitCode=0
Aug 13 20:06:37 crc kubenswrapper[4183]: I0813 20:06:37.756002 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215"}
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.342086 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.342230 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" podNamespace="openshift-marketplace" podName="community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.343500 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.393189 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460305 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460466 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.460712 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562320 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562455 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.562501 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.563335 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.563627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.624249 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"community-operators-p7svp\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") " pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.675174 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.780855 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"}
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.785269 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" exitCode=0
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.785411 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d"}
Aug 13 20:06:38 crc kubenswrapper[4183]: I0813 20:06:38.796367 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679"}
Aug 13 20:06:39 crc kubenswrapper[4183]: I0813 20:06:39.382481 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:06:39 crc kubenswrapper[4183]: I0813 20:06:39.811895 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7"}
Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.666927 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.667427 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.822606 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b" exitCode=0
Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.822832 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"}
Aug 13 20:06:40 crc kubenswrapper[4183]: I0813 20:06:40.827595 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"}
Aug 13 20:06:41 crc kubenswrapper[4183]: I0813 20:06:41.835751 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"}
Aug 13 20:06:45 crc kubenswrapper[4183]: I0813 20:06:45.666543 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:06:45 crc kubenswrapper[4183]: I0813 20:06:45.667135 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.209149 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.231273 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="df02f99a-b4f8-4711-aedf-964dcb4d3400"
Aug 13 20:06:46 crc kubenswrapper[4183]: I0813 20:06:46.231314 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="df02f99a-b4f8-4711-aedf-964dcb4d3400"
Aug 13 20:06:47 crc kubenswrapper[4183]: I0813 20:06:47.015557 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.218239 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.869394 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:49 crc kubenswrapper[4183]: I0813 20:06:49.913567 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.033314 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.668940 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.669135 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.717035 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.723046 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Aug 13 20:06:50 crc kubenswrapper[4183]: I0813 20:06:50.910383 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866"}
Aug 13 20:06:51 crc kubenswrapper[4183]: I0813 20:06:51.343841 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Aug 13 20:06:51 crc kubenswrapper[4183]: I0813 20:06:51.919581 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98"}
Aug 13 20:06:52 crc kubenswrapper[4183]: I0813 20:06:52.939619 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b"}
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.719310 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720070 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720141 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720171 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.720205 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Pending"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.847698 4183 scope.go:117] "RemoveContainer" containerID="3adbf9773c9dee772e1fae33ef3bfea1611715fe8502455203e764d46595a8bc"
Aug 13 20:06:54 crc kubenswrapper[4183]: I0813 20:06:54.985710 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289"}
Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.666286 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.666865 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Aug 13 20:06:55 crc kubenswrapper[4183]: I0813 20:06:55.997314 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"56d9256d8ee968b89d58cda59af60969","Type":"ContainerStarted","Data":"844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202"}
Aug 13 20:06:58 crc kubenswrapper[4183]: I0813 20:06:58.023164 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679" exitCode=0
Aug 13 20:06:58 crc kubenswrapper[4183]: I0813 20:06:58.023567 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679"}
Aug 13 20:06:59 crc kubenswrapper[4183]: I0813 20:06:59.164298 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=9.164227237 podStartE2EDuration="9.164227237s" podCreationTimestamp="2025-08-13 20:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:06:56.384585892 +0000 UTC m=+1383.077250730" watchObservedRunningTime="2025-08-13 20:06:59.164227237 +0000 UTC m=+1385.856892155"
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.040353 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerStarted","Data":"ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160"}
Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.666357 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Aug 13 20:07:00
crc kubenswrapper[4183]: I0813 20:07:00.667547 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.717568 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718035 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718195 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.718446 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.723382 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.760496 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:00 crc kubenswrapper[4183]: I0813 20:07:00.947442 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4txfd" podStartSLOduration=5.377419812 podStartE2EDuration="26.947382872s" podCreationTimestamp="2025-08-13 20:06:34 +0000 UTC" firstStartedPulling="2025-08-13 20:06:36.744736971 +0000 UTC m=+1363.437401649" 
lastFinishedPulling="2025-08-13 20:06:58.314699941 +0000 UTC m=+1385.007364709" observedRunningTime="2025-08-13 20:07:00.09942957 +0000 UTC m=+1386.792094548" watchObservedRunningTime="2025-08-13 20:07:00.947382872 +0000 UTC m=+1387.640047580" Aug 13 20:07:01 crc kubenswrapper[4183]: I0813 20:07:01.053138 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:02 crc kubenswrapper[4183]: I0813 20:07:02.062380 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.066363 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa" exitCode=0 Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.066554 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"} Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.225319 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.225450 4183 topology_manager.go:215] "Topology Admit Handler" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" podNamespace="openshift-kube-apiserver" podName="installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.241292 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.252570 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.252718 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371516 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371593 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.371635 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473588 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 
13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473649 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473740 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.473926 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:03 crc kubenswrapper[4183]: I0813 20:07:03.474127 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.460102 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.535456 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"installer-11-crc\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") " 
pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.632665 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.633258 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.771343 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Aug 13 20:07:04 crc kubenswrapper[4183]: I0813 20:07:04.907291 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.111193 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerStarted","Data":"d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"} Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.183763 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cfdk8" podStartSLOduration=4.576405217 podStartE2EDuration="30.18370006s" podCreationTimestamp="2025-08-13 20:06:35 +0000 UTC" firstStartedPulling="2025-08-13 20:06:37.758363852 +0000 UTC m=+1364.451028550" lastFinishedPulling="2025-08-13 20:07:03.365658395 +0000 UTC m=+1390.058323393" observedRunningTime="2025-08-13 20:07:05.183269748 +0000 UTC m=+1391.875934756" watchObservedRunningTime="2025-08-13 20:07:05.18370006 +0000 UTC m=+1391.876364888" Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.402368 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 
20:07:05.588097 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Aug 13 20:07:05 crc kubenswrapper[4183]: W0813 20:07:05.615964 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod47a054e4_19c2_4c12_a054_fc5edc98978a.slice/crio-82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763 WatchSource:0}: Error finding container 82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763: Status 404 returned error can't find the container with id 82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763 Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.667290 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:07:05 crc kubenswrapper[4183]: I0813 20:07:05.667378 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.091326 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.091412 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.136054 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" 
event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerStarted","Data":"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763"} Aug 13 20:07:06 crc kubenswrapper[4183]: I0813 20:07:06.550982 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.151422 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4txfd" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server" containerID="cri-o://ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" gracePeriod=2 Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.152121 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerStarted","Data":"1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"} Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.231709 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-11-crc" podStartSLOduration=5.231646296 podStartE2EDuration="5.231646296s" podCreationTimestamp="2025-08-13 20:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:07.229267578 +0000 UTC m=+1393.921932296" watchObservedRunningTime="2025-08-13 20:07:07.231646296 +0000 UTC m=+1393.924311034" Aug 13 20:07:07 crc kubenswrapper[4183]: I0813 20:07:07.286308 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cfdk8" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" probeResult="failure" output=< Aug 13 20:07:07 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:07:07 crc 
kubenswrapper[4183]: > Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.192452 4183 generic.go:334] "Generic (PLEG): container finished" podID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerID="ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" exitCode=0 Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.194124 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160"} Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.713376 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.890060 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.891033 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.891471 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") pod \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\" (UID: \"af6c965e-9dc8-417a-aa1c-303a50ec9adc\") " Aug 13 20:07:08 crc kubenswrapper[4183]: I0813 20:07:08.892132 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities" (OuterVolumeSpecName: "utilities") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.011540 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg" (OuterVolumeSpecName: "kube-api-access-ckbzg") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "kube-api-access-ckbzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.015756 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.015858 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ckbzg\" (UniqueName: \"kubernetes.io/projected/af6c965e-9dc8-417a-aa1c-303a50ec9adc-kube-api-access-ckbzg\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.212389 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4txfd" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.225379 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af6c965e-9dc8-417a-aa1c-303a50ec9adc" (UID: "af6c965e-9dc8-417a-aa1c-303a50ec9adc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.226151 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4txfd" event={"ID":"af6c965e-9dc8-417a-aa1c-303a50ec9adc","Type":"ContainerDied","Data":"0ac24e234dbea3fbef3137a45a6686f522b22807b700e39bf1183421025f953d"} Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.226223 4183 scope.go:117] "RemoveContainer" containerID="ff7f35679861a611a5ba4e3c78554ac68d5f4553adfb22336409ae2267a78160" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.320702 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af6c965e-9dc8-417a-aa1c-303a50ec9adc-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.376467 4183 scope.go:117] "RemoveContainer" containerID="35b65310d7cdfa6d3f8542bf95fcc97b0283ba68976893b228beafacea70e679" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.456132 4183 scope.go:117] "RemoveContainer" containerID="ba4e7e607991d317206ebde80c8cb2e26997cbbc08e8b4f17e61b221f795d438" Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.543745 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:07:09 crc kubenswrapper[4183]: I0813 20:07:09.571687 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4txfd"] Aug 13 20:07:10 crc kubenswrapper[4183]: I0813 20:07:10.667045 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:07:10 crc kubenswrapper[4183]: I0813 20:07:10.667532 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:11 crc kubenswrapper[4183]: I0813 20:07:11.218191 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" path="/var/lib/kubelet/pods/af6c965e-9dc8-417a-aa1c-303a50ec9adc/volumes" Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.284216 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-67cbf64bc9-jjfds_b23d6435-6431-4905-b41b-a517327385e5/openshift-apiserver/3.log" Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285762 4183 generic.go:334] "Generic (PLEG): container finished" podID="b23d6435-6431-4905-b41b-a517327385e5" containerID="b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" exitCode=0 Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285861 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a"} Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.285930 4183 scope.go:117] "RemoveContainer" containerID="df1d1d9a22e05cc0ee9c2836e149b57342e813e732ecae98f07e805dbee82ebb" Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.666054 4183 patch_prober.go:28] interesting pod/apiserver-67cbf64bc9-jjfds container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Aug 13 20:07:15 crc kubenswrapper[4183]: I0813 20:07:15.666198 4183 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.70:8443/readyz\": dial tcp 10.217.0.70:8443: connect: connection refused" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.185655 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295187 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" event={"ID":"b23d6435-6431-4905-b41b-a517327385e5","Type":"ContainerDied","Data":"411add17e78de78ccd75f5c0e0dfb380e3bff9047da00adac5d17d33bfb78e58"} Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295262 4183 scope.go:117] "RemoveContainer" containerID="b03552e2b35c92b59eb334cf496ac9d89324ae268cf17ae601bd0d6a94df8289" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.295293 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67cbf64bc9-jjfds" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.302642 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cfdk8" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.331041 4183 scope.go:117] "RemoveContainer" containerID="b7b2fb66a37e8c7191a914067fe2f9036112a584c9ca7714873849353733889a" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370123 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370703 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370929 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.370972 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371014 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371046 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371094 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371133 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371182 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371243 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc 
kubenswrapper[4183]: I0813 20:07:16.371284 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") pod \"b23d6435-6431-4905-b41b-a517327385e5\" (UID: \"b23d6435-6431-4905-b41b-a517327385e5\") " Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371667 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371702 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371918 4183 reconciler_common.go:300] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-image-import-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.371945 4183 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.372972 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.380871 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.384032 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj" (OuterVolumeSpecName: "kube-api-access-6j2kj") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "kube-api-access-6j2kj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.395651 4183 scope.go:117] "RemoveContainer" containerID="ee7ad10446d56157471e17a6fd0a6c5ffb7cc6177a566dcf214a0b78b5502ef3"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.443578 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473163 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6j2kj\" (UniqueName: \"kubernetes.io/projected/b23d6435-6431-4905-b41b-a517327385e5-kube-api-access-6j2kj\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473231 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.473243 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23d6435-6431-4905-b41b-a517327385e5-audit-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.514920 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.515325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config" (OuterVolumeSpecName: "config") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.520955 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574284 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574332 4183 reconciler_common.go:300] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.574348 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-etcd-client\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.616269 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit" (OuterVolumeSpecName: "audit") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.619083 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.675731 4183 reconciler_common.go:300] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-audit\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.675868 4183 reconciler_common.go:300] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b23d6435-6431-4905-b41b-a517327385e5-encryption-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.688930 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "b23d6435-6431-4905-b41b-a517327385e5" (UID: "b23d6435-6431-4905-b41b-a517327385e5"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:07:16 crc kubenswrapper[4183]: I0813 20:07:16.777555 4183 reconciler_common.go:300] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b23d6435-6431-4905-b41b-a517327385e5-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.332901 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"]
Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.349174 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-67cbf64bc9-jjfds"]
Aug 13 20:07:17 crc kubenswrapper[4183]: I0813 20:07:17.468404 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"]
Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313383 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d" exitCode=0
Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313692 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cfdk8" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" containerID="cri-o://d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80" gracePeriod=2
Aug 13 20:07:18 crc kubenswrapper[4183]: I0813 20:07:18.313898 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"}
Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.219654 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23d6435-6431-4905-b41b-a517327385e5" path="/var/lib/kubelet/pods/b23d6435-6431-4905-b41b-a517327385e5/volumes"
Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.322545 4183 generic.go:334] "Generic (PLEG): container finished" podID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerID="d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80" exitCode=0
Aug 13 20:07:19 crc kubenswrapper[4183]: I0813 20:07:19.322644 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"}
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.070461 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"]
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076068 4183 topology_manager.go:215] "Topology Admit Handler" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" podNamespace="openshift-apiserver" podName="apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076570 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="fix-audit-permissions"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076593 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="fix-audit-permissions"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076607 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076615 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076963 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.076984 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.076996 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077004 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077014 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-content"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077058 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-content"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077069 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-utilities"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077077 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="extract-utilities"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077085 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077093 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077107 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077117 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077129 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077136 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077147 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077156 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077310 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077325 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077335 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077345 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077358 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077382 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077392 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6c965e-9dc8-417a-aa1c-303a50ec9adc" containerName="registry-server"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077402 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077411 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077420 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077523 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077532 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.077547 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.077555 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.078031 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.078358 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.078375 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver"
Aug 13 20:07:20 crc kubenswrapper[4183]: E0813 20:07:20.079939 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.079958 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23d6435-6431-4905-b41b-a517327385e5" containerName="openshift-apiserver-check-endpoints"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.090318 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.120717 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.143089 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.143954 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.144162 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.145585 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.152960 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"]
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.163645 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174554 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174703 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174746 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174820 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174860 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174926 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174956 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.174984 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175008 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175038 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.175065 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.179288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.179574 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.187850 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.188868 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.189288 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.265979 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276394 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") "
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276475 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") "
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276546 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") pod \"5391dc5d-0f00-4464-b617-b164e2f9b77a\" (UID: \"5391dc5d-0f00-4464-b617-b164e2f9b77a\") "
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276674 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276718 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276838 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276864 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276918 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276949 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.276991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277022 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277062 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277092 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.277137 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.278049 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.279247 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.281050 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.281554 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.288187 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities" (OuterVolumeSpecName: "utilities") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.290228 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.290477 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.294052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.327843 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.329297 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.334041 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w" (OuterVolumeSpecName: "kube-api-access-nqx8w") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "kube-api-access-nqx8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.339052 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.350518 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373138 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfdk8" event={"ID":"5391dc5d-0f00-4464-b617-b164e2f9b77a","Type":"ContainerDied","Data":"93c5c47bf133377eafcb9942e19796d3fe7fe2e004e4bf8e026b7ad2cfda695d"}
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373208 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfdk8"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.373223 4183 scope.go:117] "RemoveContainer" containerID="d4e66bdfd9dd4a7f2d135310d101ff9f0390135dfa3cce9fda943b1c05565a80"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.380660 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.380710 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nqx8w\" (UniqueName: \"kubernetes.io/projected/5391dc5d-0f00-4464-b617-b164e2f9b77a-kube-api-access-nqx8w\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.390558 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerStarted","Data":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"}
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.451122 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.503198 4183 scope.go:117] "RemoveContainer" containerID="8774ff62b19406788c10fedf068a0f954eca6a67f3db06bf9b50da1d5c7f38aa"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.539637 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p7svp" podStartSLOduration=4.757178704 podStartE2EDuration="42.539582856s" podCreationTimestamp="2025-08-13 20:06:38 +0000 UTC" firstStartedPulling="2025-08-13 20:06:40.825674156 +0000 UTC m=+1367.518338884" lastFinishedPulling="2025-08-13 20:07:18.608078248 +0000 UTC m=+1405.300743036" observedRunningTime="2025-08-13 20:07:20.539262247 +0000 UTC m=+1407.231927065" watchObservedRunningTime="2025-08-13 20:07:20.539582856 +0000 UTC m=+1407.232247584"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.665127 4183 scope.go:117] "RemoveContainer" containerID="d0410fb00ff1950c83008d849c88f9052caf868a3476a49f11cc841d25bf1215"
Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.767388 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5391dc5d-0f00-4464-b617-b164e2f9b77a" (UID: "5391dc5d-0f00-4464-b617-b164e2f9b77a"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:20 crc kubenswrapper[4183]: I0813 20:07:20.790747 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5391dc5d-0f00-4464-b617-b164e2f9b77a-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.105498 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.120492 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cfdk8"] Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.218084 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" path="/var/lib/kubelet/pods/5391dc5d-0f00-4464-b617-b164e2f9b77a/volumes" Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.355501 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"] Aug 13 20:07:21 crc kubenswrapper[4183]: W0813 20:07:21.374354 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e8708a_e40d_4d28_846b_c52eda4d1755.slice/crio-2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8 WatchSource:0}: Error finding container 2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8: Status 404 returned error can't find the container with id 2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8 Aug 13 20:07:21 crc kubenswrapper[4183]: I0813 20:07:21.402828 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"2059a6e71652337fe2cdf8946abc3898c6e467e3863a7aa2b93b3528d16734f8"} Aug 13 20:07:22 crc kubenswrapper[4183]: 
I0813 20:07:22.164391 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"] Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165017 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1784282a-268d-4e44-a766-43281414e2dc" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165221 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165237 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165257 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-content" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165266 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-content" Aug 13 20:07:22 crc kubenswrapper[4183]: E0813 20:07:22.165282 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-utilities" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165291 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="extract-utilities" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.165468 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="5391dc5d-0f00-4464-b617-b164e2f9b77a" containerName="registry-server" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.166174 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.170125 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-dl9g2" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.172343 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.201478 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"] Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.210239 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.210690 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.312677 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.314463 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.315166 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.390261 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.411919 4183 generic.go:334] "Generic (PLEG): container finished" podID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerID="58037de88507ed248b3008018dedcd37e5ffaf512da1efdad96531a3c165ed1d" exitCode=0 Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.412028 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerDied","Data":"58037de88507ed248b3008018dedcd37e5ffaf512da1efdad96531a3c165ed1d"} Aug 13 20:07:22 crc kubenswrapper[4183]: I0813 20:07:22.499614 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.031373 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-8-crc"] Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.032141 4183 topology_manager.go:215] "Topology Admit Handler" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" podNamespace="openshift-kube-scheduler" podName="installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.033275 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.063699 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-9ln8g" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.064197 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.127986 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"] Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137526 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137624 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 
20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.137673 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239627 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239817 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.239944 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.240035 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.318300 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"installer-8-crc\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.354371 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.432220 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"ee9b6eb9461a74aad78cf9091cb08ce2922ebd34495ef62c73d64b9e4a16fd71"} Aug 13 20:07:23 crc kubenswrapper[4183]: I0813 20:07:23.506287 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"] Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.097175 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"] Aug 13 20:07:24 crc kubenswrapper[4183]: W0813 20:07:24.115985 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaca1f9ff_a685_4a78_b461_3931b757f754.slice/crio-d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056 WatchSource:0}: Error finding container d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056: Status 404 returned error can't find the container with id d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056 Aug 13 20:07:24 crc kubenswrapper[4183]: 
I0813 20:07:24.337192 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"] Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.337768 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" podNamespace="openshift-kube-controller-manager" podName="installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.338997 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463611 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463699 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.463837 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.476437 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" 
event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"907e380361ba3b0228dd34236f32c08de85ddb289bd11f2a1c6bc95e5042248f"} Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.484451 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"] Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.488919 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerStarted","Data":"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056"} Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.498696 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerStarted","Data":"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"} Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.564857 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.565013 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.565046 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") 
pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.566492 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.567348 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.700714 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"installer-11-crc\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.702078 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podStartSLOduration=87.702000825 podStartE2EDuration="1m27.702000825s" podCreationTimestamp="2025-08-13 20:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:24.689446405 +0000 UTC m=+1411.382111213" watchObservedRunningTime="2025-08-13 20:07:24.702000825 +0000 UTC m=+1411.394665613" Aug 13 20:07:24 crc kubenswrapper[4183]: I0813 20:07:24.963169 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.452551 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.453223 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.522573 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerStarted","Data":"f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2"} Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.527492 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerStarted","Data":"5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef"} Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.561588 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-8-crc" podStartSLOduration=3.561536929 podStartE2EDuration="3.561536929s" podCreationTimestamp="2025-08-13 20:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:25.553178059 +0000 UTC m=+1412.245842817" watchObservedRunningTime="2025-08-13 20:07:25.561536929 +0000 UTC m=+1412.254201967" Aug 13 20:07:25 crc kubenswrapper[4183]: I0813 20:07:25.625133 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-11-crc" podStartSLOduration=3.62507817 podStartE2EDuration="3.62507817s" podCreationTimestamp="2025-08-13 20:07:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:25.606249501 +0000 UTC m=+1412.298914199" watchObservedRunningTime="2025-08-13 20:07:25.62507817 +0000 UTC m=+1412.317742888" Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.189841 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"] Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.548853 4183 generic.go:334] "Generic (PLEG): container finished" podID="1784282a-268d-4e44-a766-43281414e2dc" containerID="5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef" exitCode=0 Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.549013 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerDied","Data":"5d491b38e707472af1834693c9fb2878d530381f767e9605a1f4536f559018ef"} Aug 13 20:07:26 crc kubenswrapper[4183]: I0813 20:07:26.552214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerStarted","Data":"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31"} Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.561049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerStarted","Data":"0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6"} Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.608386 4183 patch_prober.go:28] interesting pod/apiserver-7fc54b8dd7-d2bhp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Aug 13 20:07:27 crc 
kubenswrapper[4183]: [+]log ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]etcd ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/generic-apiserver-start-informers ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/max-in-flight-filter ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/storage-object-count-tracker-hook ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/image.openshift.io-apiserver-caches ok Aug 13 20:07:27 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Aug 13 20:07:27 crc kubenswrapper[4183]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectcache ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-startinformers ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/openshift.io-restmapperupdater ok Aug 13 20:07:27 crc kubenswrapper[4183]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Aug 13 20:07:27 crc kubenswrapper[4183]: healthz check failed Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.608501 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Aug 13 20:07:27 crc kubenswrapper[4183]: I0813 20:07:27.610608 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-11-crc" podStartSLOduration=3.610560436 podStartE2EDuration="3.610560436s" podCreationTimestamp="2025-08-13 20:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-08-13 20:07:27.606207552 +0000 UTC m=+1414.298872320" watchObservedRunningTime="2025-08-13 20:07:27.610560436 +0000 UTC m=+1414.303225224" Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.081528 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181422 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") pod \"1784282a-268d-4e44-a766-43281414e2dc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181506 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") pod \"1784282a-268d-4e44-a766-43281414e2dc\" (UID: \"1784282a-268d-4e44-a766-43281414e2dc\") " Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.181844 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1784282a-268d-4e44-a766-43281414e2dc" (UID: "1784282a-268d-4e44-a766-43281414e2dc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.192577 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1784282a-268d-4e44-a766-43281414e2dc" (UID: "1784282a-268d-4e44-a766-43281414e2dc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.282391 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1784282a-268d-4e44-a766-43281414e2dc-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.282458 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1784282a-268d-4e44-a766-43281414e2dc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571373 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"1784282a-268d-4e44-a766-43281414e2dc","Type":"ContainerDied","Data":"a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"}
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571444 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.571490 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a480fccd2debaafb2ae0e571464b52a743bd9b9bd88124f3ec23ac1917ea0448"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.675683 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:28 crc kubenswrapper[4183]: I0813 20:07:28.675947 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.055307 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p7svp" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:07:30 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:07:30 crc kubenswrapper[4183]: >
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.476521 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.489692 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.785087 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.794980 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/1.log"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796348 4183 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44" exitCode=1
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796429 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"}
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.796711 4183 scope.go:117] "RemoveContainer" containerID="5591be2de8956909e600e69f97a9f842da06662ddb70dc80595c060706c1d24b"
Aug 13 20:07:30 crc kubenswrapper[4183]: I0813 20:07:30.798757 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"
Aug 13 20:07:30 crc kubenswrapper[4183]: E0813 20:07:30.802263 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.494135 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.496093 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-11-crc" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer" containerID="cri-o://1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c" gracePeriod=30
Aug 13 20:07:31 crc kubenswrapper[4183]: I0813 20:07:31.806205 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.900684 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.900870 4183 topology_manager.go:215] "Topology Admit Handler" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" podNamespace="openshift-kube-apiserver" podName="installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: E0813 20:07:33.901086 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901101 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901254 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="1784282a-268d-4e44-a766-43281414e2dc" containerName="pruner"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.901686 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.941547 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977020 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977103 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:33 crc kubenswrapper[4183]: I0813 20:07:33.977151 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078045 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078226 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078263 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078391 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.078512 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.108364 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"installer-12-crc\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") " pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.241523 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:07:34 crc kubenswrapper[4183]: I0813 20:07:34.910347 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Aug 13 20:07:34 crc kubenswrapper[4183]: W0813 20:07:34.931394 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3557248c_8f70_4165_aa66_8df983e7e01a.slice/crio-afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309 WatchSource:0}: Error finding container afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309: Status 404 returned error can't find the container with id afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309
Aug 13 20:07:35 crc kubenswrapper[4183]: I0813 20:07:35.846426 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerStarted","Data":"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309"}
Aug 13 20:07:36 crc kubenswrapper[4183]: I0813 20:07:36.856537 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerStarted","Data":"6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a"}
Aug 13 20:07:37 crc kubenswrapper[4183]: I0813 20:07:37.071385 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.071312054 podStartE2EDuration="4.071312054s" podCreationTimestamp="2025-08-13 20:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:07:37.058583339 +0000 UTC m=+1423.751248147" watchObservedRunningTime="2025-08-13 20:07:37.071312054 +0000 UTC m=+1423.763976852"
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.884289 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888306 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log"
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888691 4183 generic.go:334] "Generic (PLEG): container finished" podID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerID="1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c" exitCode=1
Aug 13 20:07:38 crc kubenswrapper[4183]: I0813 20:07:38.888738 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerDied","Data":"1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"}
Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.005603 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.899108 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" exitCode=0
Aug 13 20:07:39 crc kubenswrapper[4183]: I0813 20:07:39.899327 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"}
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.374439 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.374553 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480018 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") "
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480112 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") "
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480227 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") pod \"47a054e4-19c2-4c12-a054-fc5edc98978a\" (UID: \"47a054e4-19c2-4c12-a054-fc5edc98978a\") "
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.480543 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock" (OuterVolumeSpecName: "var-lock") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.481650 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.498477 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "47a054e4-19c2-4c12-a054-fc5edc98978a" (UID: "47a054e4-19c2-4c12-a054-fc5edc98978a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.535472 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581704 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581765 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47a054e4-19c2-4c12-a054-fc5edc98978a-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.581835 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47a054e4-19c2-4c12-a054-fc5edc98978a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929182 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-11-crc_47a054e4-19c2-4c12-a054-fc5edc98978a/installer/0.log"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929511 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p7svp" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server" containerID="cri-o://346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" gracePeriod=2
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.929634 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc"
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.931381 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"47a054e4-19c2-4c12-a054-fc5edc98978a","Type":"ContainerDied","Data":"82592d624297fddcd6792981a2d03476ea0c73592b9982be03e42a7b6cfda763"}
Aug 13 20:07:40 crc kubenswrapper[4183]: I0813 20:07:40.931445 4183 scope.go:117] "RemoveContainer" containerID="1e1a0d662b883dd47a8d67de1ea3251e342574fa602e1c0b8d1d61ebcdfcfb0c"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.023616 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.038541 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"]
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.226148 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" path="/var/lib/kubelet/pods/47a054e4-19c2-4c12-a054-fc5edc98978a/volumes"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.536707 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.699273 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") "
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.699872 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") "
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.700154 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") pod \"8518239d-8dab-48ac-a3c1-e775566b9bff\" (UID: \"8518239d-8dab-48ac-a3c1-e775566b9bff\") "
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.701044 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities" (OuterVolumeSpecName: "utilities") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.706169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl" (OuterVolumeSpecName: "kube-api-access-vv6hl") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "kube-api-access-vv6hl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.802685 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.803220 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vv6hl\" (UniqueName: \"kubernetes.io/projected/8518239d-8dab-48ac-a3c1-e775566b9bff-kube-api-access-vv6hl\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944462 4183 generic.go:334] "Generic (PLEG): container finished" podID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194" exitCode=0
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944597 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p7svp"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.944665 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"}
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.946142 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p7svp" event={"ID":"8518239d-8dab-48ac-a3c1-e775566b9bff","Type":"ContainerDied","Data":"4a52c9653485366a71b6816af21a11a7652981f948545698090cec0d47c008a7"}
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.946204 4183 scope.go:117] "RemoveContainer" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.953649 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerStarted","Data":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"}
Aug 13 20:07:41 crc kubenswrapper[4183]: I0813 20:07:41.981507 4183 scope.go:117] "RemoveContainer" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.052749 4183 scope.go:117] "RemoveContainer" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.152768 4183 scope.go:117] "RemoveContainer" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"
Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.154453 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": container with ID starting with 346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194 not found: ID does not exist" containerID="346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.154529 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194"} err="failed to get container status \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": rpc error: code = NotFound desc = could not find container \"346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194\": container with ID starting with 346c30b9a9faa8432b3782ba026d812f61ae2cf934cc3a5411eda085a0bf6194 not found: ID does not exist"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.154541 4183 scope.go:117] "RemoveContainer" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"
Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.155376 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": container with ID starting with c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d not found: ID does not exist" containerID="c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.155404 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d"} err="failed to get container status \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": rpc error: code = NotFound desc = could not find container \"c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d\": container with ID starting with c8e3392d204770a3cdf4591df44d1933cb69dee9401552f91464c20b12ca2d0d not found: ID does not exist"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.155414 4183 scope.go:117] "RemoveContainer" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"
Aug 13 20:07:42 crc kubenswrapper[4183]: E0813 20:07:42.162089 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": container with ID starting with 75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b not found: ID does not exist" containerID="75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.162170 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b"} err="failed to get container status \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": rpc error: code = NotFound desc = could not find container \"75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b\": container with ID starting with 75cca3df20371dce976a94a74005beaf51017e82ce1c4f10505ef46633dcb26b not found: ID does not exist"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.363078 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pmqwc" podStartSLOduration=4.845765531 podStartE2EDuration="1m6.363011681s" podCreationTimestamp="2025-08-13 20:06:36 +0000 UTC" firstStartedPulling="2025-08-13 20:06:38.788419425 +0000 UTC m=+1365.481084033" lastFinishedPulling="2025-08-13 20:07:40.305665565 +0000 UTC m=+1426.998330183" observedRunningTime="2025-08-13 20:07:42.355966279 +0000 UTC m=+1429.048631407" watchObservedRunningTime="2025-08-13 20:07:42.363011681 +0000 UTC m=+1429.055676399"
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.473599 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8518239d-8dab-48ac-a3c1-e775566b9bff" (UID: "8518239d-8dab-48ac-a3c1-e775566b9bff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.527765 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8518239d-8dab-48ac-a3c1-e775566b9bff-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.615264 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:07:42 crc kubenswrapper[4183]: I0813 20:07:42.643988 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p7svp"]
Aug 13 20:07:43 crc kubenswrapper[4183]: I0813 20:07:43.217590 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" path="/var/lib/kubelet/pods/8518239d-8dab-48ac-a3c1-e775566b9bff/volumes"
Aug 13 20:07:45 crc kubenswrapper[4183]: I0813 20:07:45.212168 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44"
Aug 13 20:07:45 crc kubenswrapper[4183]: E0813 20:07:45.212932 4183 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-7d46d5bb6d-rrg6t_openshift-ingress-operator(7d51f445-054a-4e4f-a67b-a828f5a32511)\"" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Aug 13 20:07:47 crc kubenswrapper[4183]: I0813 20:07:47.152606 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:07:47 crc kubenswrapper[4183]: I0813 20:07:47.153146 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:07:48 crc kubenswrapper[4183]: I0813 20:07:48.274609 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pmqwc" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:07:48 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:07:48 crc kubenswrapper[4183]: >
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.746623 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747374 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747426 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747463 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:07:54 crc kubenswrapper[4183]: I0813 20:07:54.747494 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.327978 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pmqwc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.333721 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.336866 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler" containerID="cri-o://5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4" gracePeriod=30
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.337094 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller" containerID="cri-o://da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a" gracePeriod=30
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.337181 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer" containerID="cri-o://daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd" gracePeriod=30
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346086 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346238 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346406 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="wait-for-host-port"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346436 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="wait-for-host-port"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346453 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346461 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346471 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-utilities"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346479 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-utilities"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346492 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346498 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346511 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346519 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346529 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346535 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346547 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-content"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346554 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="extract-content"
Aug 13 20:07:57 crc kubenswrapper[4183]: E0813 20:07:57.346565 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346574 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346714 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-cert-syncer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346729 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a054e4-19c2-4c12-a054-fc5edc98978a" containerName="installer"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346740 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346756 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b2a8634cfe8a21cffcc98cc8c87160" containerName="kube-scheduler-recovery-controller"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.346765 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8518239d-8dab-48ac-a3c1-e775566b9bff" containerName="registry-server"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.447443 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.447855 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.548995 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549096 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549212 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.549286 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.582463 4183 kubelet.go:2533] "SyncLoop (probe)"
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.602443 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_92b2a8634cfe8a21cffcc98cc8c87160/kube-scheduler-cert-syncer/0.log" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.604392 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.624543 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.664649 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751139 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") pod \"92b2a8634cfe8a21cffcc98cc8c87160\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751244 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") pod \"92b2a8634cfe8a21cffcc98cc8c87160\" (UID: \"92b2a8634cfe8a21cffcc98cc8c87160\") " Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751279 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "92b2a8634cfe8a21cffcc98cc8c87160" (UID: 
"92b2a8634cfe8a21cffcc98cc8c87160"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751451 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "92b2a8634cfe8a21cffcc98cc8c87160" (UID: "92b2a8634cfe8a21cffcc98cc8c87160"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.751558 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:57 crc kubenswrapper[4183]: I0813 20:07:57.853326 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/92b2a8634cfe8a21cffcc98cc8c87160-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.090766 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_92b2a8634cfe8a21cffcc98cc8c87160/kube-scheduler-cert-syncer/0.log" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094243 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a" exitCode=0 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094309 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd" exitCode=2 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094315 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094332 4183 generic.go:334] "Generic (PLEG): container finished" podID="92b2a8634cfe8a21cffcc98cc8c87160" containerID="5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4" exitCode=0 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.094538 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3aeac3b3f0abd9616c32591e8c03ee04ad93d9eaa1a57f5f009d1e5534dc9bf" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.099010 4183 generic.go:334] "Generic (PLEG): container finished" podID="aca1f9ff-a685-4a78-b461-3931b757f754" containerID="f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2" exitCode=0 Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.099494 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerDied","Data":"f4f5bb6e58084ee7338acaefbb6a6dac0e4bc0801ff33d60707cf12512275cd2"} Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.100631 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301" Aug 13 20:07:58 crc kubenswrapper[4183]: I0813 20:07:58.152190 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="92b2a8634cfe8a21cffcc98cc8c87160" podUID="6a57a7fb1944b43a6bd11a349520d301" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.105101 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pmqwc" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" 
containerID="cri-o://18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" gracePeriod=2 Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.209677 4183 scope.go:117] "RemoveContainer" containerID="200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.221052 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b2a8634cfe8a21cffcc98cc8c87160" path="/var/lib/kubelet/pods/92b2a8634cfe8a21cffcc98cc8c87160/volumes" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.553184 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.676586 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680046 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680156 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680224 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") pod \"aca1f9ff-a685-4a78-b461-3931b757f754\" (UID: \"aca1f9ff-a685-4a78-b461-3931b757f754\") " Aug 13 20:07:59 crc kubenswrapper[4183]: 
I0813 20:07:59.680443 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.680477 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock" (OuterVolumeSpecName: "var-lock") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.689991 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aca1f9ff-a685-4a78-b461-3931b757f754" (UID: "aca1f9ff-a685-4a78-b461-3931b757f754"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781577 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781662 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.781847 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") pod \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\" (UID: \"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed\") " Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782093 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782114 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aca1f9ff-a685-4a78-b461-3931b757f754-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782133 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aca1f9ff-a685-4a78-b461-3931b757f754-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.782925 4183 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities" (OuterVolumeSpecName: "utilities") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.789589 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78" (OuterVolumeSpecName: "kube-api-access-h4g78") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "kube-api-access-h4g78". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.883253 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:07:59 crc kubenswrapper[4183]: I0813 20:07:59.883325 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h4g78\" (UniqueName: \"kubernetes.io/projected/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-kube-api-access-h4g78\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.114082 4183 generic.go:334] "Generic (PLEG): container finished" podID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" exitCode=0 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.115157 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pmqwc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.115204 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.116555 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmqwc" event={"ID":"0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed","Type":"ContainerDied","Data":"3025039c6358002d40f5661f0d4ebe701c314f685e0a46fd007206a116acffb8"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.116586 4183 scope.go:117] "RemoveContainer" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126548 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126932 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"aca1f9ff-a685-4a78-b461-3931b757f754","Type":"ContainerDied","Data":"d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.126988 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ba8aa29fc697e8bf02d629bbdd14aece0c6f0cdf3711bdd960f2de5046f056" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.130167 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/2.log" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.130727 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"2be75d1e514468ff600570e8a9d6f13a97a775a4d62bca4f69b639c8be59cf64"} Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.207987 4183 scope.go:117] "RemoveContainer" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.295514 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.320057 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.320538 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Aug 13 
20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.320963 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-content" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321206 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-content" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321231 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321239 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321300 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321309 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321319 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321327 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321342 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="extract-utilities" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321349 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" 
containerName="extract-utilities" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321360 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321367 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321379 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321385 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.321395 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321405 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321518 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321530 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321543 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" containerName="registry-server" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321554 4183 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321564 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" containerName="installer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.321575 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326298 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" containerID="cri-o://4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326705 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326757 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.326866 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="56d9256d8ee968b89d58cda59af60969" containerName="cluster-policy-controller" containerID="cri-o://6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b" gracePeriod=30 Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.395709 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.395815 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497307 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497385 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497494 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.497539 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.549594 4183 scope.go:117] "RemoveContainer" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.673146 4183 scope.go:117] "RemoveContainer" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.674149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": container with ID starting with 18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012 not found: ID does not exist" containerID="18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.674212 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012"} err="failed to get container status \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": rpc error: code = NotFound desc = could not find container \"18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012\": container with ID starting with 
18ee63c59f6a1fec2a9a9cca96016647026294fd85d2b3d9bab846314db76012 not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.674225 4183 scope.go:117] "RemoveContainer" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.677462 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": container with ID starting with 89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1 not found: ID does not exist" containerID="89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.677521 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1"} err="failed to get container status \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": rpc error: code = NotFound desc = could not find container \"89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1\": container with ID starting with 89a368507993ea42c79b3af991cc9b1cccf950682066ea5091d608d27e68cbe1 not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.677535 4183 scope.go:117] "RemoveContainer" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" Aug 13 20:08:00 crc kubenswrapper[4183]: E0813 20:08:00.678622 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": container with ID starting with 29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d not found: ID does not exist" containerID="29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d" 
Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.678687 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d"} err="failed to get container status \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": rpc error: code = NotFound desc = could not find container \"29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d\": container with ID starting with 29c42b8a41289c4fea25430048589dc9dedd4b658b109126c4e196ce9807773d not found: ID does not exist" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718601 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718702 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.718973 4183 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" start-of-body= Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.719119 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="56d9256d8ee968b89d58cda59af60969" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": dial tcp 192.168.126.11:10357: connect: connection refused" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.737956 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_56d9256d8ee968b89d58cda59af60969/kube-controller-manager-cert-syncer/0.log" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.740496 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.749570 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.801739 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") pod \"56d9256d8ee968b89d58cda59af60969\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.801960 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") pod \"56d9256d8ee968b89d58cda59af60969\" (UID: \"56d9256d8ee968b89d58cda59af60969\") " Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.802251 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "56d9256d8ee968b89d58cda59af60969" (UID: "56d9256d8ee968b89d58cda59af60969"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.802286 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "56d9256d8ee968b89d58cda59af60969" (UID: "56d9256d8ee968b89d58cda59af60969"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.814840 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" (UID: "0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903427 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-cert-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903510 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:00 crc kubenswrapper[4183]: I0813 20:08:00.903528 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/56d9256d8ee968b89d58cda59af60969-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.072465 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.084490 4183 kubelet.go:2439] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pmqwc"] Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.142231 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_56d9256d8ee968b89d58cda59af60969/kube-controller-manager-cert-syncer/0.log" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144623 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144689 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289" exitCode=2 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144712 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144729 4183 generic.go:334] "Generic (PLEG): container finished" podID="56d9256d8ee968b89d58cda59af60969" containerID="4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144739 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.144967 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.149350 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.150471 4183 generic.go:334] "Generic (PLEG): container finished" podID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerID="0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6" exitCode=0 Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.150531 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerDied","Data":"0028ed1d2f2b6b7f754d78a66fe28befb02bf632d29bbafaf101bd5630ca0ce6"} Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.272296 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="56d9256d8ee968b89d58cda59af60969" podUID="bd6a3a59e513625ca0ae3724df2686bc" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.307600 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed" path="/var/lib/kubelet/pods/0e1b407b-80a9-40d6-aa0b-a5ffb555c8ed/volumes" Aug 13 20:08:01 crc kubenswrapper[4183]: I0813 20:08:01.308471 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d9256d8ee968b89d58cda59af60969" path="/var/lib/kubelet/pods/56d9256d8ee968b89d58cda59af60969/volumes" 
Aug 13 20:08:01 crc kubenswrapper[4183]: E0813 20:08:01.370919 4183 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56d9256d8ee968b89d58cda59af60969.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56d9256d8ee968b89d58cda59af60969.slice/crio-a386295a4836609efa126cdad0f8da6cec9163b751ff142e15d9693c89cf9866\": RecentStats: unable to find data in memory cache]" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.701939 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726456 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726566 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726656 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") pod \"a45bfab9-f78b-4d72-b5b7-903e60401124\" (UID: \"a45bfab9-f78b-4d72-b5b7-903e60401124\") " Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726837 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock" (OuterVolumeSpecName: "var-lock") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.726907 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.727044 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.727061 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a45bfab9-f78b-4d72-b5b7-903e60401124-kubelet-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.737672 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a45bfab9-f78b-4d72-b5b7-903e60401124" (UID: "a45bfab9-f78b-4d72-b5b7-903e60401124"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:08:02 crc kubenswrapper[4183]: I0813 20:08:02.828096 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a45bfab9-f78b-4d72-b5b7-903e60401124-kube-api-access\") on node \"crc\" DevicePath \"\"" Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164692 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"a45bfab9-f78b-4d72-b5b7-903e60401124","Type":"ContainerDied","Data":"8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31"} Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164755 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0bbf4ce8e2b74d4c5a52712776bba9158d1913b3bd281fb7184ad1a80ceb31" Aug 13 20:08:03 crc kubenswrapper[4183]: I0813 20:08:03.164921 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.210374 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.233240 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.233318 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="1f93bc40-081c-4dbc-905a-acda15a1c6ce" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.254392 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.259540 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.267557 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.285068 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:08 crc kubenswrapper[4183]: I0813 20:08:08.294482 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207101 4183 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="ecc1c7aa8cb60b63c1dc3d6b8b1d65f58dad0f51d174f6d245650a3c918170f3" exitCode=0 Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207402 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"ecc1c7aa8cb60b63c1dc3d6b8b1d65f58dad0f51d174f6d245650a3c918170f3"} Aug 13 20:08:09 crc kubenswrapper[4183]: I0813 20:08:09.207460 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"7d38e4405721e751ffe695369180693433405ae4331549aed5834d79ed44b3ee"} Aug 13 20:08:10 crc kubenswrapper[4183]: I0813 20:08:10.242468 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"f484dd54fa6f1d9458704164d3b0d07e7de45fc1c5c3732080db88204b97a260"} Aug 13 20:08:10 crc kubenswrapper[4183]: I0813 20:08:10.242541 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"321449b7baef718aa4f8e6a5e8027626824e675a08ec111132c5033a8de2bea4"} Aug 13 20:08:11 crc kubenswrapper[4183]: I0813 20:08:11.251534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"748707f199ebf717d7b583f31dd21339f68d06a1f3fe2bd66ad8cd355863d0b6"} Aug 13 20:08:11 crc kubenswrapper[4183]: I0813 20:08:11.252067 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.208554 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.230189 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="953c24d8-ecc7-443c-a9ae-a3caf95e5e63" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.230240 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="953c24d8-ecc7-443c-a9ae-a3caf95e5e63" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.257216 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=4.2571685630000005 podStartE2EDuration="4.257168563s" podCreationTimestamp="2025-08-13 20:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:11.277452103 +0000 UTC m=+1457.970116921" watchObservedRunningTime="2025-08-13 20:08:12.257168563 +0000 UTC m=+1458.949833291" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.259925 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.268844 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:12 crc 
kubenswrapper[4183]: I0813 20:08:12.272823 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.292493 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:12 crc kubenswrapper[4183]: I0813 20:08:12.302328 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288033 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"0be6c231766bb308c5fd1c35f7d778e9085ef87b609e771c9b8c0562273f73af"} Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288425 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"2a5d2c4f8091434e96a501a9652a7fc6eabd91a48a80b63a8e598b375d046dcf"} Aug 13 20:08:13 crc kubenswrapper[4183]: I0813 20:08:13.288449 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"134690fa1c76729c58b7776be3ce993405e907d37bcd9895349f1550b9cb7b4e"} Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.298722 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"b3f81ba7d134155fdc498a60346928d213e2da7a3f20f0b50f64409568a246cc"} Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.298848 4183 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"dd5de1da9d2aa603827fd445dd57c562cf58ea00258cc5b64a324701843c502b"} Aug 13 20:08:14 crc kubenswrapper[4183]: I0813 20:08:14.346705 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.34665693 podStartE2EDuration="2.34665693s" podCreationTimestamp="2025-08-13 20:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:14.341638536 +0000 UTC m=+1461.034303354" watchObservedRunningTime="2025-08-13 20:08:14.34665693 +0000 UTC m=+1461.039321658" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.293526 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.294368 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.298199 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.298330 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.299395 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.301153 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:22 crc kubenswrapper[4183]: I0813 20:08:22.369525 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.361444 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.769578 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.769759 4183 topology_manager.go:215] "Topology Admit Handler" podUID="7f47300841026200cf071984642de38e" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.770065 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770092 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770233 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" containerName="installer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770659 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.770874 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771150 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" containerID="cri-o://cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771208 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771215 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771239 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83" gracePeriod=15 Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.771375 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9" gracePeriod=15 Aug 13 20:08:23 crc 
kubenswrapper[4183]: I0813 20:08:23.772366 4183 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772453 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772611 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772625 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772647 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772655 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772665 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="setup" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772674 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="setup" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772684 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772692 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer" Aug 13 20:08:23 crc 
kubenswrapper[4183]: E0813 20:08:23.772704 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772712 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz" Aug 13 20:08:23 crc kubenswrapper[4183]: E0813 20:08:23.772721 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772728 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772885 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772925 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-insecure-readyz" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772939 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-regeneration-controller" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772952 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-check-endpoints" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.772961 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="48128e8d38b5cbcd2691da698bd9cac3" containerName="kube-apiserver-cert-syncer" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852631 4183 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852745 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852875 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852946 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.852979 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853006 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853028 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.853139 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.878338 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954727 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954844 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954931 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954966 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.954988 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955017 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955063 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955089 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955161 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955272 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955281 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955310 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955310 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955346 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:23 crc kubenswrapper[4183]: I0813 20:08:23.955367 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7f47300841026200cf071984642de38e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.174115 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Aug 13 20:08:24 crc kubenswrapper[4183]: E0813 20:08:24.241628 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.372432 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7f47300841026200cf071984642de38e","Type":"ContainerStarted","Data":"887b3913b57be6cd6694b563992e615df63b28b24f279e51986fb9dfc689f5d5"}
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.390453 4183 generic.go:334] "Generic (PLEG): container finished" podID="3557248c-8f70-4165-aa66-8df983e7e01a" containerID="6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a" exitCode=0
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.390594 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerDied","Data":"6b580ba621276e10a232c15451ffaeddf32ec7044f6dad05aaf5e3b8fd52877a"}
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.395765 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.397652 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.399281 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.414309 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log"
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416055 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9" exitCode=0
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416100 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83" exitCode=0
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416115 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9" exitCode=0
Aug 13 20:08:24 crc kubenswrapper[4183]: I0813 20:08:24.416127 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343" exitCode=2
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.214399 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.216001 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.217007 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.440382 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.442184 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.436735 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7f47300841026200cf071984642de38e","Type":"ContainerStarted","Data":"92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89"}
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.886490 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.888411 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.889866 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.995965 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") "
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996063 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") "
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996135 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") pod \"3557248c-8f70-4165-aa66-8df983e7e01a\" (UID: \"3557248c-8f70-4165-aa66-8df983e7e01a\") "
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996285 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock" (OuterVolumeSpecName: "var-lock") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:25 crc kubenswrapper[4183]: I0813 20:08:25.996363 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.005385 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3557248c-8f70-4165-aa66-8df983e7e01a" (UID: "3557248c-8f70-4165-aa66-8df983e7e01a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.097962 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3557248c-8f70-4165-aa66-8df983e7e01a-kube-api-access\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.098312 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-var-lock\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.098332 4183 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3557248c-8f70-4165-aa66-8df983e7e01a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.174745 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.178136 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.181246 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.182057 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.183114 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: E0813 20:08:26.183129 4183 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445472 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445476 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"3557248c-8f70-4165-aa66-8df983e7e01a","Type":"ContainerDied","Data":"afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309"}
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.445574 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb6a839e21ef78ccbdf5a295971cba7dafad8761ac11e55edbab58d304e4309"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.449279 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.451519 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.478514 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.479931 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.858069 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.859873 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.862061 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.863006 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.863981 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920653 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") "
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920747 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") "
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920915 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920952 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") pod \"48128e8d38b5cbcd2691da698bd9cac3\" (UID: \"48128e8d38b5cbcd2691da698bd9cac3\") "
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.920982 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921140 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48128e8d38b5cbcd2691da698bd9cac3" (UID: "48128e8d38b5cbcd2691da698bd9cac3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921497 4183 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-cert-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921532 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-resource-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:26 crc kubenswrapper[4183]: I0813 20:08:26.921543 4183 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48128e8d38b5cbcd2691da698bd9cac3-audit-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.218998 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48128e8d38b5cbcd2691da698bd9cac3" path="/var/lib/kubelet/pods/48128e8d38b5cbcd2691da698bd9cac3/volumes"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.458319 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_48128e8d38b5cbcd2691da698bd9cac3/kube-apiserver-cert-syncer/0.log"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459534 4183 generic.go:334] "Generic (PLEG): container finished" podID="48128e8d38b5cbcd2691da698bd9cac3" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12" exitCode=0
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459608 4183 scope.go:117] "RemoveContainer" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.459755 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.462241 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.464065 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.466914 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.468362 4183 status_manager.go:853] "Failed to get status for pod" podUID="48128e8d38b5cbcd2691da698bd9cac3" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.470527 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.471441 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.513125 4183 scope.go:117] "RemoveContainer" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.624083 4183 scope.go:117] "RemoveContainer" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.690658 4183 scope.go:117] "RemoveContainer" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.727822 4183 scope.go:117] "RemoveContainer" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.785051 4183 scope.go:117] "RemoveContainer" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.863453 4183 scope.go:117] "RemoveContainer" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.864654 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": container with ID starting with 6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9 not found: ID does not exist" containerID="6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.864760 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9"} err="failed to get container status \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": rpc error: code = NotFound desc = could not find container \"6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9\": container with ID starting with 6e4f959539810eaf11abed055957cc9d830327c14164adc78761f27b297f44b9 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.864855 4183 scope.go:117] "RemoveContainer" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.865988 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": container with ID starting with 8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83 not found: ID does not exist" containerID="8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866096 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83"} err="failed to get container status \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": rpc error: code = NotFound desc = could not find container \"8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83\": container with ID starting with 8bb841779401bd078d2cc708da9ac3cfd63491bf70c3a4f9e582b8786fa96b83 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866111 4183 scope.go:117] "RemoveContainer" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.866831 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": container with ID starting with 955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9 not found: ID does not exist" containerID="955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866880 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9"} err="failed to get container status \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": rpc error: code = NotFound desc = could not find container \"955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9\": container with ID starting with 955a586517e3a80d51e63d25ab6529e5a5465596e05a4fd7f9f0729d7998cbc9 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.866925 4183 scope.go:117] "RemoveContainer" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.868091 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": container with ID starting with bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343 not found: ID does not exist" containerID="bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.868222 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343"} err="failed to get container status \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": rpc error: code = NotFound desc = could not find container \"bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343\": container with ID starting with bb37d165f1c10d3b09fbe44a52f35b204201086505dc6f64b89245df7312c343 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.868252 4183 scope.go:117] "RemoveContainer" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.869097 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": container with ID starting with cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12 not found: ID does not exist" containerID="cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.869152 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12"} err="failed to get container status \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": rpc error: code = NotFound desc = could not find container \"cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12\": container with ID starting with cc3b998787ca6834bc0a8e76f29b082be5c1e343717bbe7707559989e9554f12 not found: ID does not exist"
Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.869166 4183 scope.go:117] "RemoveContainer" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"
Aug 13 20:08:27 crc kubenswrapper[4183]: E0813 20:08:27.870079 4183 remote_runtime.go:432] "ContainerStatus
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": container with ID starting with c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba not found: ID does not exist" containerID="c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba" Aug 13 20:08:27 crc kubenswrapper[4183]: I0813 20:08:27.870130 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba"} err="failed to get container status \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": rpc error: code = NotFound desc = could not find container \"c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba\": container with ID starting with c71c0072a7c08ea4ae494694be88f8491b485a84b46f62cedff5223a7c75b5ba not found: ID does not exist" Aug 13 20:08:28 crc kubenswrapper[4183]: E0813 20:08:28.434605 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 
20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.410013 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.412321 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.413478 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.414387 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.415398 4183 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:32 crc kubenswrapper[4183]: I0813 20:08:32.422569 4183 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.424377 4183 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="200ms" Aug 13 20:08:32 crc kubenswrapper[4183]: E0813 20:08:32.626301 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="400ms" Aug 13 20:08:33 crc kubenswrapper[4183]: E0813 20:08:33.028474 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="800ms" Aug 13 20:08:33 crc kubenswrapper[4183]: E0813 20:08:33.830041 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="1.6s" Aug 13 20:08:35 crc kubenswrapper[4183]: I0813 20:08:35.213617 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:35 crc kubenswrapper[4183]: I0813 20:08:35.215381 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection 
refused" Aug 13 20:08:35 crc kubenswrapper[4183]: E0813 20:08:35.431177 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="3.2s" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.521459 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.523202 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.524232 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.525871 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.526512 4183 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:36 crc kubenswrapper[4183]: E0813 20:08:36.526527 4183 kubelet_node_status.go:581] "Unable to update node status" err="update 
node status exceeds retry count" Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.209360 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.211765 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.212614 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.231367 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.231761 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:37 crc kubenswrapper[4183]: E0813 20:08:37.233020 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.233654 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:37 crc kubenswrapper[4183]: I0813 20:08:37.538540 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"302d89cfbab2c80a69d727fd8c30e727ff36453533105813906fa746343277a0"} Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.437606 4183 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.130.11:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.185b6c6f19d3379d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7f47300841026200cf071984642de38e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,LastTimestamp:2025-08-13 20:08:24.221382557 +0000 UTC m=+1470.914047315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546455 4183 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" exitCode=0 Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546519 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0"} Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546956 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.546972 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.548383 4183 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.551440 4183 status_manager.go:853] "Failed to get status for pod" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.553221 4183 status_manager.go:853] "Failed to get status for pod" podUID="7f47300841026200cf071984642de38e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:38 crc kubenswrapper[4183]: I0813 20:08:38.554631 4183 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.130.11:6443: connect: connection refused" Aug 13 20:08:38 crc kubenswrapper[4183]: E0813 20:08:38.633940 4183 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.130.11:6443: connect: connection refused" interval="6.4s" Aug 13 20:08:39 crc kubenswrapper[4183]: I0813 20:08:39.559148 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282"} Aug 13 20:08:39 crc kubenswrapper[4183]: I0813 20:08:39.559214 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807"} Aug 13 20:08:40 crc kubenswrapper[4183]: I0813 20:08:40.599184 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3"} Aug 13 20:08:40 crc kubenswrapper[4183]: I0813 20:08:40.599535 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078"} Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611076 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333"} Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611749 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.611849 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:41 crc kubenswrapper[4183]: I0813 20:08:41.612213 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.234267 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.234736 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.342162 4183 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Aug 13 20:08:42 crc kubenswrapper[4183]: I0813 20:08:42.342428 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.273716 4183 kubelet.go:2533] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.471929 4183 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.525141 4183 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53c20181-da08-4c94-91d7-6f71a843fa75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:38Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-08-13T20:08:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-08-13T20:08:40Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt
\\\":\\\"2025-08-13T20:08:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T20:08:37Z\\\"}}}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"53c20181-da08-4c94-91d7-6f71a843fa75\": field is immutable" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.593733 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.653927 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.653970 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.665200 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Aug 13 20:08:47 crc kubenswrapper[4183]: I0813 20:08:47.671109 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Aug 13 20:08:48 crc kubenswrapper[4183]: I0813 20:08:48.660687 4183 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:48 crc kubenswrapper[4183]: I0813 20:08:48.660738 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="53c20181-da08-4c94-91d7-6f71a843fa75" Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748075 4183 
kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748960 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.748992 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749206 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749313 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:08:54 crc kubenswrapper[4183]: I0813 20:08:54.749414 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:08:55 crc kubenswrapper[4183]: I0813 20:08:55.227202 4183 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1"
Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.627330 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.631933 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Aug 13 20:08:57 crc kubenswrapper[4183]: I0813 20:08:57.982066 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.147301 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.293535 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.296700 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.461026 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Aug 13 20:08:58 crc kubenswrapper[4183]: I0813 20:08:58.601848 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.117265 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.177676 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.254728 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.262980 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.335459 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.630933 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.789658 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.845263 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Aug 13 20:08:59 crc kubenswrapper[4183]: I0813 20:08:59.903631 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.057338 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.074697 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.110668 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.303377 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.360247 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.464834 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.489071 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.607957 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.720412 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.780720 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.784394 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.795747 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.862674 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.940179 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Aug 13 20:09:00 crc kubenswrapper[4183]: I0813 20:09:00.956659 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.085377 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.178096 4183 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.328063 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.447104 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.476288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.547427 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.641589 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.665206 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.676310 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.681567 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.692079 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.769757 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.785259 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.957170 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Aug 13 20:09:01 crc kubenswrapper[4183]: I0813 20:09:01.977180 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.081278 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.096022 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.099320 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.378915 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.386933 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.493464 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.498007 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.511713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.686008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.695292 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Aug 13 20:09:02 crc kubenswrapper[4183]: I0813 20:09:02.961043 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.031525 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.102611 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.110397 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.141717 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.320726 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.446960 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.478887 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.509574 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.607414 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.648203 4183 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.774962 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.947576 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.993438 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Aug 13 20:09:03 crc kubenswrapper[4183]: I0813 20:09:03.998076 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.033861 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.037003 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.042158 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.068241 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.081452 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.101661 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.189515 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.265058 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.324465 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.326161 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.543695 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.547105 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.572449 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.598540 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.654289 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.672610 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.717240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.822302 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 20:09:04 crc kubenswrapper[4183]: I0813 20:09:04.968089 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.057616 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.199184 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.244267 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.296634 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.313920 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.472644 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.481972 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.506429 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.556529 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.669561 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.695473 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.866327 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.914427 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Aug 13 20:09:05 crc kubenswrapper[4183]: I0813 20:09:05.977991 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.000600 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.010262 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.018669 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.055596 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.095466 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.112337 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.114240 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.126649 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.308156 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.309407 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.369216 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.518110 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.585833 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.595313 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.778450 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.831825 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.850352 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:09:06 crc kubenswrapper[4183]: I0813 20:09:06.962435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.157179 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.180116 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.221351 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.250856 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.257683 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.279858 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.280641 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.301944 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.371653 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.376765 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.558063 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.609699 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.620979 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.644389 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.671435 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.696221 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.869656 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.871617 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.884152 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Aug 13 20:09:07 crc kubenswrapper[4183]: I0813 20:09:07.902953 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.098194 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.125093 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.177401 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.363241 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.532440 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.672480 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.699313 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.700878 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.705558 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.782818 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.783315 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.858137 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.868186 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Aug 13 20:09:08 crc kubenswrapper[4183]: I0813 20:09:08.999092 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.148008 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.199442 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.265032 4183 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.405863 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.430381 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.460881 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.505573 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.664845 4183 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.780304 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.924032 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Aug 13 20:09:09 crc kubenswrapper[4183]: I0813 20:09:09.937226 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.072708 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.134052 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.164281 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.227498 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.276419 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.288036 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.370724 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.456064 4183 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.457612 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.458203 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=47.458141811 podStartE2EDuration="47.458141811s" podCreationTimestamp="2025-08-13 20:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:08:47.588553361 +0000 UTC m=+1494.281218409" watchObservedRunningTime="2025-08-13 20:09:10.458141811 +0000 UTC m=+1517.150806510"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.462790 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.462937 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.481349 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.495878 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.498050 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.506394 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.516937 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.516769112 podStartE2EDuration="23.516769112s" podCreationTimestamp="2025-08-13 20:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:09:10.508199597 +0000 UTC m=+1517.200864395" watchObservedRunningTime="2025-08-13 20:09:10.516769112 +0000 UTC m=+1517.209433890"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.610135 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.712759 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.743313 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.840994 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Aug 13 20:09:10 crc kubenswrapper[4183]: I0813 20:09:10.942279 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.032092 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.093276 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.243481 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.289761 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.342288 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.384979 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.572094 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Aug 13 20:09:11 crc kubenswrapper[4183]: I0813 20:09:11.624107 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.101727 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.141251 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.263078 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.362504 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.444336 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.801094 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Aug 13 20:09:12 crc kubenswrapper[4183]: I0813 20:09:12.813525 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.016540 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.393057 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.499447 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.526685 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.600389 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.632243 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.857723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Aug 13 20:09:13 crc kubenswrapper[4183]: I0813 20:09:13.992095 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Aug 13 20:09:21 crc kubenswrapper[4183]: I0813 20:09:21.399619 4183 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Aug 13 20:09:21 crc
kubenswrapper[4183]: I0813 20:09:21.401000 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" containerID="cri-o://92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" gracePeriod=5 Aug 13 20:09:26 crc kubenswrapper[4183]: I0813 20:09:26.975279 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:26 crc kubenswrapper[4183]: I0813 20:09:26.975935 4183 generic.go:334] "Generic (PLEG): container finished" podID="7f47300841026200cf071984642de38e" containerID="92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" exitCode=137 Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.058440 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.058580 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170217 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170309 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170448 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170487 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170552 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") pod \"7f47300841026200cf071984642de38e\" (UID: \"7f47300841026200cf071984642de38e\") " Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170629 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log" (OuterVolumeSpecName: 
"var-log") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170679 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170706 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock" (OuterVolumeSpecName: "var-lock") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170749 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests" (OuterVolumeSpecName: "manifests") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170949 4183 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-manifests\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170975 4183 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-log\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.170991 4183 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.171005 4183 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-var-lock\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.181996 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "7f47300841026200cf071984642de38e" (UID: "7f47300841026200cf071984642de38e"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.218138 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f47300841026200cf071984642de38e" path="/var/lib/kubelet/pods/7f47300841026200cf071984642de38e/volumes" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.218546 4183 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.272738 4183 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f47300841026200cf071984642de38e-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.289033 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.289098 4183 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0724fd71-838e-4f2e-b139-bb1fd482d17e" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.293089 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.293166 4183 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0724fd71-838e-4f2e-b139-bb1fd482d17e" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.984729 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7f47300841026200cf071984642de38e/startup-monitor/0.log" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.984982 4183 scope.go:117] "RemoveContainer" 
containerID="92928a395bcb4b479dc083922bbe86ac38b51d98cd589eedcbc4c18744b69d89" Aug 13 20:09:27 crc kubenswrapper[4183]: I0813 20:09:27.985206 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Aug 13 20:09:34 crc kubenswrapper[4183]: I0813 20:09:34.861454 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Aug 13 20:09:42 crc kubenswrapper[4183]: I0813 20:09:42.336888 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.750946 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751742 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751858 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751927 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:09:54 crc kubenswrapper[4183]: I0813 20:09:54.751981 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:09:55 crc kubenswrapper[4183]: I0813 20:09:55.597745 4183 scope.go:117] "RemoveContainer" containerID="dc3b34e8b871f3bd864f0c456c6ee0a0f7a97f171f4c0c5d20a5a451b26196e9" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.277768 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:15 crc kubenswrapper[4183]: 
I0813 20:10:15.278765 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" podNamespace="openshift-multus" podName="cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: E0813 20:10:15.279955 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.279984 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: E0813 20:10:15.280009 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280021 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280316 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" containerName="installer" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.280345 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f47300841026200cf071984642de38e" containerName="startup-monitor" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.283142 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.289029 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.289532 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-smth4" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.378578 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.379062 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.379570 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.380575 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod 
\"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.481719 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.481975 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482381 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482417 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.482748 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod 
\"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.483053 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.483370 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.525627 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"cni-sysctl-allowlist-ds-jx5m8\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:15 crc kubenswrapper[4183]: I0813 20:10:15.609972 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.323726 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerStarted","Data":"e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646"} Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.323769 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerStarted","Data":"7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"} Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.324092 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:16 crc kubenswrapper[4183]: I0813 20:10:16.363837 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podStartSLOduration=1.363730948 podStartE2EDuration="1.363730948s" podCreationTimestamp="2025-08-13 20:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:10:16.360329401 +0000 UTC m=+1583.052994299" watchObservedRunningTime="2025-08-13 20:10:16.363730948 +0000 UTC m=+1583.056395666" Aug 13 20:10:17 crc kubenswrapper[4183]: I0813 20:10:17.407369 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" Aug 13 20:10:18 crc kubenswrapper[4183]: I0813 20:10:18.241296 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"] Aug 13 20:10:19 crc kubenswrapper[4183]: I0813 20:10:19.343356 4183 kuberuntime_container.go:770] "Killing container with a grace period" 
pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" gracePeriod=30 Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.615052 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.619515 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.621844 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:25 crc kubenswrapper[4183]: E0813 20:10:25.621965 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.614950 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.617609 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.621472 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:35 crc kubenswrapper[4183]: E0813 20:10:35.621559 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.618009 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.623908 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.626362 4183 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" cmd=["/bin/bash","-c","test -f /ready/ready"] Aug 13 20:10:45 crc kubenswrapper[4183]: E0813 20:10:45.626486 4183 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.550765 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jx5m8_b78e72e3-8ece-4d66-aa9c-25445bacdc99/kube-multus-additional-cni-plugins/0.log" Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.550945 4183 generic.go:334] "Generic (PLEG): container finished" podID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646" exitCode=137 Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551009 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerDied","Data":"e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646"} Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551044 4183 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8" event={"ID":"b78e72e3-8ece-4d66-aa9c-25445bacdc99","Type":"ContainerDied","Data":"7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"}
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.551075 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f3fc61d9433e4a7d56e81573eb626edd2106764ab8b801202688d1a24986dc2"
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.584207 4183 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jx5m8_b78e72e3-8ece-4d66-aa9c-25445bacdc99/kube-multus-additional-cni-plugins/0.log"
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.584448 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8"
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.706635 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.706906 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707146 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707314 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708152 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready" (OuterVolumeSpecName: "ready") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708195 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.707465 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") pod \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\" (UID: \"b78e72e3-8ece-4d66-aa9c-25445bacdc99\") "
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708648 4183 reconciler_common.go:300] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b78e72e3-8ece-4d66-aa9c-25445bacdc99-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708672 4183 reconciler_common.go:300] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b78e72e3-8ece-4d66-aa9c-25445bacdc99-ready\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.708683 4183 reconciler_common.go:300] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b78e72e3-8ece-4d66-aa9c-25445bacdc99-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.719169 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9" (OuterVolumeSpecName: "kube-api-access-25pz9") pod "b78e72e3-8ece-4d66-aa9c-25445bacdc99" (UID: "b78e72e3-8ece-4d66-aa9c-25445bacdc99"). InnerVolumeSpecName "kube-api-access-25pz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:10:49 crc kubenswrapper[4183]: I0813 20:10:49.810314 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-25pz9\" (UniqueName: \"kubernetes.io/projected/b78e72e3-8ece-4d66-aa9c-25445bacdc99-kube-api-access-25pz9\") on node \"crc\" DevicePath \"\""
Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.560008 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jx5m8"
Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.605358 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"]
Aug 13 20:10:50 crc kubenswrapper[4183]: I0813 20:10:50.611870 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jx5m8"]
Aug 13 20:10:51 crc kubenswrapper[4183]: I0813 20:10:51.217828 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" path="/var/lib/kubelet/pods/b78e72e3-8ece-4d66-aa9c-25445bacdc99/volumes"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.752861 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753521 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753599 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753657 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:10:54 crc kubenswrapper[4183]: I0813 20:10:54.753739 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.709489 4183 scope.go:117] "RemoveContainer" containerID="da6e49e577c89776d78e03c12b1aa711de8c3b6ceb252a9c05b51d38a6e6fd8a"
Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.758106 4183 scope.go:117] "RemoveContainer" containerID="5b04274f5ebeb54ec142f28db67158b3f20014bf0046505512a20f576eb7c4b4"
Aug 13 20:10:55 crc kubenswrapper[4183]: I0813 20:10:55.792646 4183 scope.go:117] "RemoveContainer" containerID="daf74224d04a5859b6f3ea7213d84dd41f91a9dfefadc077c041aabcb8247fdd"
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.755707 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.756438 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager" containerID="cri-o://764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75" gracePeriod=30
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.790837 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:10:59 crc kubenswrapper[4183]: I0813 20:10:59.791152 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager" containerID="cri-o://3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8" gracePeriod=30
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.353873 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.468116 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469581 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469685 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.469734 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.470165 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.470498 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") pod \"8b8d1c48-5762-450f-bd4d-9134869f432b\" (UID: \"8b8d1c48-5762-450f-bd4d-9134869f432b\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.473699 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.476019 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.478873 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config" (OuterVolumeSpecName: "config") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.487118 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.490218 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98" (OuterVolumeSpecName: "kube-api-access-spb98") pod "8b8d1c48-5762-450f-bd4d-9134869f432b" (UID: "8b8d1c48-5762-450f-bd4d-9134869f432b"). InnerVolumeSpecName "kube-api-access-spb98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572528 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572630 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572681 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.572732 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") pod \"becc7e17-2bc7-417d-832f-55127299d70f\" (UID: \"becc7e17-2bc7-417d-832f-55127299d70f\") "
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573142 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573163 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8d1c48-5762-450f-bd4d-9134869f432b-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573175 4183 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573186 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8d1c48-5762-450f-bd4d-9134869f432b-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.573198 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-spb98\" (UniqueName: \"kubernetes.io/projected/8b8d1c48-5762-450f-bd4d-9134869f432b-kube-api-access-spb98\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.574269 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca" (OuterVolumeSpecName: "client-ca") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.574419 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config" (OuterVolumeSpecName: "config") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.578612 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr" (OuterVolumeSpecName: "kube-api-access-nvfwr") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "kube-api-access-nvfwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.579214 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "becc7e17-2bc7-417d-832f-55127299d70f" (UID: "becc7e17-2bc7-417d-832f-55127299d70f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631669 4183 generic.go:334] "Generic (PLEG): container finished" podID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8" exitCode=0
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerDied","Data":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631841 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.631874 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fc85fd4-8wlsm" event={"ID":"8b8d1c48-5762-450f-bd4d-9134869f432b","Type":"ContainerDied","Data":"7814bf45dce77ed8a8c744f06e62839eae09ee6a9538e182ca2f0ea610392efb"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.632014 4183 scope.go:117] "RemoveContainer" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639087 4183 generic.go:334] "Generic (PLEG): container finished" podID="becc7e17-2bc7-417d-832f-55127299d70f" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75" exitCode=0
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639175 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerDied","Data":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639256 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx" event={"ID":"becc7e17-2bc7-417d-832f-55127299d70f","Type":"ContainerDied","Data":"924f68f94ccf00f51d9670a79dea4855d290329c9234e55ec074960babbce6d7"}
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.639536 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674046 4183 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-client-ca\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674428 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nvfwr\" (UniqueName: \"kubernetes.io/projected/becc7e17-2bc7-417d-832f-55127299d70f-kube-api-access-nvfwr\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674522 4183 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/becc7e17-2bc7-417d-832f-55127299d70f-config\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.674622 4183 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/becc7e17-2bc7-417d-832f-55127299d70f-serving-cert\") on node \"crc\" DevicePath \"\""
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.718560 4183 scope.go:117] "RemoveContainer" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: E0813 20:11:00.719728 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": container with ID starting with 3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8 not found: ID does not exist" containerID="3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.720139 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8"} err="failed to get container status \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": rpc error: code = NotFound desc = could not find container \"3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8\": container with ID starting with 3a7af3bd6c985bd2cf1c0ebb554af4bd79e961a7f0b299ecb95e5c8f07b051d8 not found: ID does not exist"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.720430 4183 scope.go:117] "RemoveContainer" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.775971 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.787427 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598fc85fd4-8wlsm"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.790274 4183 scope.go:117] "RemoveContainer" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: E0813 20:11:00.793167 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": container with ID starting with 764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75 not found: ID does not exist" containerID="764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.793238 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75"} err="failed to get container status \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": rpc error: code = NotFound desc = could not find container \"764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75\": container with ID starting with 764b4421d338c0c0f1baf8c5cf39f8312e1a50dc3eb5f025196bf23f93fcbe75 not found: ID does not exist"
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.822961 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:11:00 crc kubenswrapper[4183]: I0813 20:11:00.846342 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6884dcf749-n4qpx"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.219888 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" path="/var/lib/kubelet/pods/8b8d1c48-5762-450f-bd4d-9134869f432b/volumes"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.220771 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becc7e17-2bc7-417d-832f-55127299d70f" path="/var/lib/kubelet/pods/becc7e17-2bc7-417d-832f-55127299d70f/volumes"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.529530 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.529740 4183 topology_manager.go:215] "Topology Admit Handler" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" podNamespace="openshift-controller-manager" podName="controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530159 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530179 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530191 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530199 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: E0813 20:11:01.530215 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530222 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530383 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8d1c48-5762-450f-bd4d-9134869f432b" containerName="controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530400 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="becc7e17-2bc7-417d-832f-55127299d70f" containerName="route-controller-manager"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530411 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78e72e3-8ece-4d66-aa9c-25445bacdc99" containerName="kube-multus-additional-cni-plugins"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.530999 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535306 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535403 4183 topology_manager.go:215] "Topology Admit Handler" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.535706 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.536177 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.545713 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546083 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546286 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546479 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546608 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.546723 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.548592 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.550836 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.553742 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.554245 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.554485 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.555215 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.572420 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.600311 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688129 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688249 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688301 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688335 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688738 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.688877 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689031 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689097 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.689156 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.790450 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.792008 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.790906 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793212 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793305 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793338 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793351 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793433 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793497 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793556 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.793591 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795037 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795161 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.795292 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.806724 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: 
\"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.817740 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.832039 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.834455 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.860524 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:01 crc kubenswrapper[4183]: I0813 20:11:01.888227 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.196323 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.292702 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Aug 13 20:11:02 crc kubenswrapper[4183]: W0813 20:11:02.303249 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d29937_debd_4407_b2b1_d1053cb0f342.slice/crio-c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88 WatchSource:0}: Error finding container c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88: Status 404 returned error can't find the container with id c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88 Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.667677 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba"} Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.668407 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.670753 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"c5bff19800c2cb507bcf9fddcebd0a76d4998afb979fbc87c373bf9ec3c52c88"} Aug 13 20:11:02 crc 
kubenswrapper[4183]: I0813 20:11:02.670864 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe"} Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.670889 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"67a3c779a8c87e71b43d6cb834c45eddf91ef0c21c030e8ec0df8e8304073b3c"} Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671078 4183 patch_prober.go:28] interesting pod/route-controller-manager-776b8b7477-sfpvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671181 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.671541 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.673582 4183 patch_prober.go:28] interesting pod/controller-manager-778975cc4f-x5vcf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection 
refused" start-of-body= Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.673645 4183 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.701285 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podStartSLOduration=3.701183908 podStartE2EDuration="3.701183908s" podCreationTimestamp="2025-08-13 20:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:11:02.699009676 +0000 UTC m=+1629.391674674" watchObservedRunningTime="2025-08-13 20:11:02.701183908 +0000 UTC m=+1629.393848866" Aug 13 20:11:02 crc kubenswrapper[4183]: I0813 20:11:02.740758 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podStartSLOduration=3.740696931 podStartE2EDuration="3.740696931s" podCreationTimestamp="2025-08-13 20:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:11:02.739829186 +0000 UTC m=+1629.432494084" watchObservedRunningTime="2025-08-13 20:11:02.740696931 +0000 UTC m=+1629.433361929" Aug 13 20:11:03 crc kubenswrapper[4183]: I0813 20:11:03.682819 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Aug 13 20:11:03 crc kubenswrapper[4183]: I0813 20:11:03.689194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.755271 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.755913 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756028 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756079 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:11:54 crc kubenswrapper[4183]: I0813 20:11:54.756124 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.757243 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758015 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758059 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758090 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:12:54 crc kubenswrapper[4183]: I0813 20:12:54.758135 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:12:55 crc kubenswrapper[4183]: I0813 20:12:55.917583 4183 scope.go:117]
"RemoveContainer" containerID="be1e0c86831f89f585cd2c81563266389f6b99fe3a2b00e25563c193b7ae2289"
Aug 13 20:12:55 crc kubenswrapper[4183]: I0813 20:12:55.959001 4183 scope.go:117] "RemoveContainer" containerID="6fac670aec99a6e895db54957107db545029859582d9e7bfff8bcb8b8323317b"
Aug 13 20:12:56 crc kubenswrapper[4183]: I0813 20:12:56.001663 4183 scope.go:117] "RemoveContainer" containerID="4159ba877f8ff7e1e08f72bf3d12699149238f2597dfea0b4882ee6797fe2c98"
Aug 13 20:12:56 crc kubenswrapper[4183]: I0813 20:12:56.041888 4183 scope.go:117] "RemoveContainer" containerID="844a16e08b8b6f6647fb07d6bae6657e732727da7ada45f1211b70ff85887202"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.759301 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760034 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760078 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:13:54 crc kubenswrapper[4183]: I0813 20:13:54.760150 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.760866 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761674 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813
20:14:54.761741 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761815 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:14:54 crc kubenswrapper[4183]: I0813 20:14:54.761868 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.374435 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"]
Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.374945 4183 topology_manager.go:215] "Topology Admit Handler" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251935-d7x6j"
Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.375673 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.378592 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.379408 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.416621 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.471537 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.472052 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.472270 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.573741 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.574275 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.574554 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.576120 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.585446 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.598138 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"collect-profiles-29251935-d7x6j\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:00 crc kubenswrapper[4183]: I0813 20:15:00.699457 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:01 crc kubenswrapper[4183]: I0813 20:15:01.025171 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Aug 13 20:15:01 crc kubenswrapper[4183]: I0813 20:15:01.315680 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerStarted","Data":"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855"} Aug 13 20:15:02 crc kubenswrapper[4183]: I0813 20:15:02.324076 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerStarted","Data":"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373"} Aug 13 20:15:02 crc kubenswrapper[4183]: I0813 20:15:02.375455 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" 
podStartSLOduration=2.375358886 podStartE2EDuration="2.375358886s" podCreationTimestamp="2025-08-13 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:15:02.373158483 +0000 UTC m=+1869.065823261" watchObservedRunningTime="2025-08-13 20:15:02.375358886 +0000 UTC m=+1869.068023744"
Aug 13 20:15:03 crc kubenswrapper[4183]: I0813 20:15:03.334093 4183 generic.go:334] "Generic (PLEG): container finished" podID="51936587-a4af-470d-ad92-8ab9062cbc72" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" exitCode=0
Aug 13 20:15:03 crc kubenswrapper[4183]: I0813 20:15:03.334182 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerDied","Data":"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373"}
Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.645413 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728715 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728881 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.728956 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") pod \"51936587-a4af-470d-ad92-8ab9062cbc72\" (UID: \"51936587-a4af-470d-ad92-8ab9062cbc72\") " Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.730207 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume" (OuterVolumeSpecName: "config-volume") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.741647 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.756593 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7" (OuterVolumeSpecName: "kube-api-access-wf6f7") pod "51936587-a4af-470d-ad92-8ab9062cbc72" (UID: "51936587-a4af-470d-ad92-8ab9062cbc72"). InnerVolumeSpecName "kube-api-access-wf6f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830174 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wf6f7\" (UniqueName: \"kubernetes.io/projected/51936587-a4af-470d-ad92-8ab9062cbc72-kube-api-access-wf6f7\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830264 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51936587-a4af-470d-ad92-8ab9062cbc72-secret-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:04 crc kubenswrapper[4183]: I0813 20:15:04.830278 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51936587-a4af-470d-ad92-8ab9062cbc72-config-volume\") on node \"crc\" DevicePath \"\"" Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347352 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" event={"ID":"51936587-a4af-470d-ad92-8ab9062cbc72","Type":"ContainerDied","Data":"21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855"} Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347776 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21feea149913711f5f5cb056c6f29099adea6ffae9788ce014d1175df1602855" Aug 13 20:15:05 crc kubenswrapper[4183]: I0813 20:15:05.347539 4183 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"
Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.762499 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763520 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763609 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763646 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:15:54 crc kubenswrapper[4183]: I0813 20:15:54.763691 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.765066 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766207 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766249 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766277 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:16:54 crc kubenswrapper[4183]: I0813 20:16:54.766315 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:16:56 crc kubenswrapper[4183]: I0813 
20:16:56.146559 4183 scope.go:117] "RemoveContainer" containerID="e8b2e7f930d500cf3c7f8ae13874b47c586ff96efdacd975bab28dc614898646"
Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.193441 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"]
Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194055 4183 topology_manager.go:215] "Topology Admit Handler" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" podNamespace="openshift-marketplace" podName="certified-operators-8bbjz"
Aug 13 20:16:58 crc kubenswrapper[4183]: E0813 20:16:58.194328 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles"
Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194342 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles"
Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.194512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" containerName="collect-profiles"
Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.195638 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.259855 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389343 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389447 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.389506 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.490922 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.491109 4183 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.491155 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.492075 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.492098 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.518036 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"certified-operators-8bbjz\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") " pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.521542 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:16:58 crc kubenswrapper[4183]: I0813 20:16:58.870097 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"] Aug 13 20:16:58 crc kubenswrapper[4183]: W0813 20:16:58.874840 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e241cc6_c71d_4fa0_9a1a_18098bcf6594.slice/crio-18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f WatchSource:0}: Error finding container 18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f: Status 404 returned error can't find the container with id 18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f Aug 13 20:16:59 crc kubenswrapper[4183]: I0813 20:16:59.093491 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f"} Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.103133 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537" exitCode=0 Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.103218 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537"} Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.113335 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.181024 4183 kubelet.go:2429] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.181189 4183 topology_manager.go:215] "Topology Admit Handler" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" podNamespace="openshift-marketplace" podName="redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.185407 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.265288 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319177 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319326 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.319369 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421284 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421378 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.421424 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.422439 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.422862 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.462297 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"redhat-marketplace-nsk78\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") " pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:00 crc kubenswrapper[4183]: I0813 20:17:00.507167 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:01 crc kubenswrapper[4183]: I0813 20:17:01.049659 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"] Aug 13 20:17:01 crc kubenswrapper[4183]: W0813 20:17:01.065223 4183 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda084eaff_10e9_439e_96f3_f3450fb14db7.slice/crio-95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439 WatchSource:0}: Error finding container 95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439: Status 404 returned error can't find the container with id 95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439 Aug 13 20:17:01 crc kubenswrapper[4183]: I0813 20:17:01.134559 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439"} Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.145903 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"} Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.151179 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" 
containerID="53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a" exitCode=0 Aug 13 20:17:02 crc kubenswrapper[4183]: I0813 20:17:02.151240 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a"} Aug 13 20:17:03 crc kubenswrapper[4183]: I0813 20:17:03.161241 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"} Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.048838 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.049503 4183 topology_manager.go:215] "Topology Admit Handler" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" podNamespace="openshift-marketplace" podName="redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.050910 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.077652 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.078043 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.078266 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.179865 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.179991 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"redhat-operators-swl5s\" (UID: 
\"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.180911 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.181460 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.181579 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.247450 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95" exitCode=0 Aug 13 20:17:16 crc kubenswrapper[4183]: I0813 20:17:16.247534 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"} Aug 13 20:17:18 crc kubenswrapper[4183]: I0813 20:17:18.501218 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:19 crc kubenswrapper[4183]: I0813 20:17:19.268059 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerStarted","Data":"f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"} Aug 13 20:17:20 crc kubenswrapper[4183]: I0813 20:17:20.726525 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"redhat-operators-swl5s\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") " pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:20 crc kubenswrapper[4183]: I0813 20:17:20.882632 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s" Aug 13 20:17:21 crc kubenswrapper[4183]: I0813 20:17:21.156903 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8bbjz" podStartSLOduration=6.689642693 podStartE2EDuration="23.156848646s" podCreationTimestamp="2025-08-13 20:16:58 +0000 UTC" firstStartedPulling="2025-08-13 20:17:00.105515813 +0000 UTC m=+1986.798180411" lastFinishedPulling="2025-08-13 20:17:16.572721666 +0000 UTC m=+2003.265386364" observedRunningTime="2025-08-13 20:17:21.14682776 +0000 UTC m=+2007.839492668" watchObservedRunningTime="2025-08-13 20:17:21.156848646 +0000 UTC m=+2007.849513524" Aug 13 20:17:21 crc kubenswrapper[4183]: I0813 20:17:21.601317 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"] Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.294948 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" 
event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5"} Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.298131 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerID="c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be" exitCode=0 Aug 13 20:17:22 crc kubenswrapper[4183]: I0813 20:17:22.298174 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"} Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.318734 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661" exitCode=0 Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.319078 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661"} Aug 13 20:17:24 crc kubenswrapper[4183]: I0813 20:17:24.328164 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerStarted","Data":"e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"} Aug 13 20:17:25 crc kubenswrapper[4183]: I0813 20:17:25.786058 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nsk78" podStartSLOduration=5.345693387 podStartE2EDuration="25.786006691s" podCreationTimestamp="2025-08-13 20:17:00 +0000 UTC" firstStartedPulling="2025-08-13 20:17:02.153570299 +0000 UTC 
m=+1988.846235017" lastFinishedPulling="2025-08-13 20:17:22.593883603 +0000 UTC m=+2009.286548321" observedRunningTime="2025-08-13 20:17:25.781553214 +0000 UTC m=+2012.474217902" watchObservedRunningTime="2025-08-13 20:17:25.786006691 +0000 UTC m=+2012.478671639" Aug 13 20:17:26 crc kubenswrapper[4183]: I0813 20:17:26.348657 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"} Aug 13 20:17:28 crc kubenswrapper[4183]: I0813 20:17:28.522411 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:28 crc kubenswrapper[4183]: I0813 20:17:28.522533 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8bbjz" Aug 13 20:17:29 crc kubenswrapper[4183]: I0813 20:17:29.752257 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8bbjz" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" probeResult="failure" output=< Aug 13 20:17:29 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:17:29 crc kubenswrapper[4183]: > Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.356548 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.357267 4183 topology_manager.go:215] "Topology Admit Handler" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" podNamespace="openshift-marketplace" podName="community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.359125 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397519 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397720 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.397941 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.465031 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500478 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.500571 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.501318 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.501491 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.508324 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.508371 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.580356 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"community-operators-tfv59\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") " pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.687703 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:30 crc kubenswrapper[4183]: I0813 20:17:30.690202 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59" Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.157708 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfv59"] Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.386560 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790"} Aug 13 20:17:31 crc kubenswrapper[4183]: I0813 20:17:31.552454 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nsk78" Aug 13 20:17:32 crc kubenswrapper[4183]: I0813 20:17:32.398376 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395" exitCode=0 Aug 13 20:17:32 crc kubenswrapper[4183]: I0813 20:17:32.400080 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395"} Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.148460 4183 kubelet.go:2445] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"]
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.149759 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nsk78" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" containerID="cri-o://e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee" gracePeriod=2
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.430402 4183 generic.go:334] "Generic (PLEG): container finished" podID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerID="e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee" exitCode=0
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.430608 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"}
Aug 13 20:17:34 crc kubenswrapper[4183]: I0813 20:17:34.436848 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"}
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.735554 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78"
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779065 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") "
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779167 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") "
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.779255 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") pod \"a084eaff-10e9-439e-96f3-f3450fb14db7\" (UID: \"a084eaff-10e9-439e-96f3-f3450fb14db7\") "
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.780384 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities" (OuterVolumeSpecName: "utilities") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.790133 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg" (OuterVolumeSpecName: "kube-api-access-sjvpg") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "kube-api-access-sjvpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.880210 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sjvpg\" (UniqueName: \"kubernetes.io/projected/a084eaff-10e9-439e-96f3-f3450fb14db7-kube-api-access-sjvpg\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.880249 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.912682 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a084eaff-10e9-439e-96f3-f3450fb14db7" (UID: "a084eaff-10e9-439e-96f3-f3450fb14db7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:35 crc kubenswrapper[4183]: I0813 20:17:35.981512 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a084eaff-10e9-439e-96f3-f3450fb14db7-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451597 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsk78" event={"ID":"a084eaff-10e9-439e-96f3-f3450fb14db7","Type":"ContainerDied","Data":"95f40ae6abffb8f7f44a2ff2ed8cce3117476e86756bb59fef9e083f90e1c439"}
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451670 4183 scope.go:117] "RemoveContainer" containerID="e7f09b6d9d86854fd3cc30b6c65331b20aae92eab9c6d03b65f319607fa59aee"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.451886 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsk78"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.507084 4183 scope.go:117] "RemoveContainer" containerID="c83a6ceb92ddb0c1bf7184148f9ba8f188093d3e9de859e304c76ea54c5ea5be"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.558206 4183 scope.go:117] "RemoveContainer" containerID="53f81688e5fd104f842edd52471938f4845344eecb7146cd6a01389e1136528a"
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.856002 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"]
Aug 13 20:17:36 crc kubenswrapper[4183]: I0813 20:17:36.946699 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsk78"]
Aug 13 20:17:37 crc kubenswrapper[4183]: I0813 20:17:37.233945 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" path="/var/lib/kubelet/pods/a084eaff-10e9-439e-96f3-f3450fb14db7/volumes"
Aug 13 20:17:38 crc kubenswrapper[4183]: I0813 20:17:38.703123 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:38 crc kubenswrapper[4183]: I0813 20:17:38.841230 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:39 crc kubenswrapper[4183]: I0813 20:17:39.170438 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"]
Aug 13 20:17:40 crc kubenswrapper[4183]: I0813 20:17:40.478207 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8bbjz" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" containerID="cri-o://f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57" gracePeriod=2
Aug 13 20:17:42 crc kubenswrapper[4183]: I0813 20:17:42.497339 4183 generic.go:334] "Generic (PLEG): container finished" podID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerID="f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57" exitCode=0
Aug 13 20:17:42 crc kubenswrapper[4183]: I0813 20:17:42.497393 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"}
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.186627 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.285473 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") "
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.286067 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") "
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.286932 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities" (OuterVolumeSpecName: "utilities") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.287345 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") pod \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\" (UID: \"8e241cc6-c71d-4fa0-9a1a-18098bcf6594\") "
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.289686 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.294325 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw" (OuterVolumeSpecName: "kube-api-access-c56vw") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "kube-api-access-c56vw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.392494 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c56vw\" (UniqueName: \"kubernetes.io/projected/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-kube-api-access-c56vw\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511412 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8bbjz" event={"ID":"8e241cc6-c71d-4fa0-9a1a-18098bcf6594","Type":"ContainerDied","Data":"18af4daca70b211334d04e0a4c7f6070bf9ac31d48abf8bfcac2bc9afc14c07f"}
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511496 4183 scope.go:117] "RemoveContainer" containerID="f31945f91f4930b964bb19c200a97bbe2d2d546d46ca69ecc3087aeaff8c4d57"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.511652 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8bbjz"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.556128 4183 scope.go:117] "RemoveContainer" containerID="81e7ca605fef6f0437d478dbda9f87bc7944dc329f70a81183a2e1f06c2bae95"
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.582229 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e241cc6-c71d-4fa0-9a1a-18098bcf6594" (UID: "8e241cc6-c71d-4fa0-9a1a-18098bcf6594"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.602192 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e241cc6-c71d-4fa0-9a1a-18098bcf6594-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:17:43 crc kubenswrapper[4183]: I0813 20:17:43.645938 4183 scope.go:117] "RemoveContainer" containerID="a859c58e4fdfbde98f0fc6b6dd5b6b351283c9a369a0cf1ca5981e6dffd1d537"
Aug 13 20:17:45 crc kubenswrapper[4183]: I0813 20:17:45.247674 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"]
Aug 13 20:17:45 crc kubenswrapper[4183]: I0813 20:17:45.309950 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8bbjz"]
Aug 13 20:17:47 crc kubenswrapper[4183]: I0813 20:17:47.219237 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" path="/var/lib/kubelet/pods/8e241cc6-c71d-4fa0-9a1a-18098bcf6594/volumes"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.767616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768291 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768440 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768565 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:17:54 crc kubenswrapper[4183]: I0813 20:17:54.768832 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:18:21 crc kubenswrapper[4183]: I0813 20:18:21.790031 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906" exitCode=0
Aug 13 20:18:21 crc kubenswrapper[4183]: I0813 20:18:21.790379 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"}
Aug 13 20:18:24 crc kubenswrapper[4183]: I0813 20:18:24.830046 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerStarted","Data":"9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"}
Aug 13 20:18:28 crc kubenswrapper[4183]: I0813 20:18:28.667179 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tfv59" podStartSLOduration=8.94479276 podStartE2EDuration="58.667068725s" podCreationTimestamp="2025-08-13 20:17:30 +0000 UTC" firstStartedPulling="2025-08-13 20:17:32.401991306 +0000 UTC m=+2019.094655904" lastFinishedPulling="2025-08-13 20:18:22.124267171 +0000 UTC m=+2068.816931869" observedRunningTime="2025-08-13 20:18:28.658892431 +0000 UTC m=+2075.351557529" watchObservedRunningTime="2025-08-13 20:18:28.667068725 +0000 UTC m=+2075.359733513"
Aug 13 20:18:30 crc kubenswrapper[4183]: I0813 20:18:30.691065 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:30 crc kubenswrapper[4183]: I0813 20:18:30.692101 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:31 crc kubenswrapper[4183]: I0813 20:18:31.812856 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:18:31 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:18:31 crc kubenswrapper[4183]: >
Aug 13 20:18:42 crc kubenswrapper[4183]: I0813 20:18:42.212915 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:18:42 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:18:42 crc kubenswrapper[4183]: >
Aug 13 20:18:50 crc kubenswrapper[4183]: I0813 20:18:50.817136 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:50 crc kubenswrapper[4183]: I0813 20:18:50.931347 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:51 crc kubenswrapper[4183]: I0813 20:18:51.204545 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv59"]
Aug 13 20:18:52 crc kubenswrapper[4183]: I0813 20:18:52.054359 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tfv59" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" containerID="cri-o://9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610" gracePeriod=2
Aug 13 20:18:53 crc kubenswrapper[4183]: I0813 20:18:53.066555 4183 generic.go:334] "Generic (PLEG): container finished" podID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerID="9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610" exitCode=0
Aug 13 20:18:53 crc kubenswrapper[4183]: I0813 20:18:53.066676 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"}
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.355104 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503611 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") "
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503694 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") "
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.503871 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") pod \"718f06fe-dcad-4053-8de2-e2c38fb7503d\" (UID: \"718f06fe-dcad-4053-8de2-e2c38fb7503d\") "
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.505841 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities" (OuterVolumeSpecName: "utilities") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.511381 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh" (OuterVolumeSpecName: "kube-api-access-j46mh") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "kube-api-access-j46mh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.605134 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.605191 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j46mh\" (UniqueName: \"kubernetes.io/projected/718f06fe-dcad-4053-8de2-e2c38fb7503d-kube-api-access-j46mh\") on node \"crc\" DevicePath \"\""
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.772825 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773000 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773054 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773115 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:18:54 crc kubenswrapper[4183]: I0813 20:18:54.773176 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087090 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfv59" event={"ID":"718f06fe-dcad-4053-8de2-e2c38fb7503d","Type":"ContainerDied","Data":"b983de43e5866346d0dd68108cf11b5abe1a858b0917c8e56d9b8c75a270c790"}
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087180 4183 scope.go:117] "RemoveContainer" containerID="9d0d4f9896e6c60385c01fe90548d89f3dfa99fc0c2cc45dfb29054b3acd6610"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.087336 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfv59"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.132051 4183 scope.go:117] "RemoveContainer" containerID="fee1587aa425cb6125597c6af788ac5a06d44abb5df280875e0d2b1624a81906"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.155373 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "718f06fe-dcad-4053-8de2-e2c38fb7503d" (UID: "718f06fe-dcad-4053-8de2-e2c38fb7503d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.193463 4183 scope.go:117] "RemoveContainer" containerID="54a087bcecc2c6f5ffbb6af57b3c4e81ed60cca12c4ac0edb8fcbaed62dfc395"
Aug 13 20:18:55 crc kubenswrapper[4183]: I0813 20:18:55.219316 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718f06fe-dcad-4053-8de2-e2c38fb7503d-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:18:56 crc kubenswrapper[4183]: I0813 20:18:56.533634 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfv59"]
Aug 13 20:18:56 crc kubenswrapper[4183]: I0813 20:18:56.585294 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tfv59"]
Aug 13 20:18:57 crc kubenswrapper[4183]: I0813 20:18:57.218185 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" path="/var/lib/kubelet/pods/718f06fe-dcad-4053-8de2-e2c38fb7503d/volumes"
Aug 13 20:18:59 crc kubenswrapper[4183]: I0813 20:18:59.120167 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783" exitCode=0
Aug 13 20:18:59 crc kubenswrapper[4183]: I0813 20:18:59.120258 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"}
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.131839 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerStarted","Data":"6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"}
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.224845 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-swl5s" podStartSLOduration=11.079722633 podStartE2EDuration="1m46.221985722s" podCreationTimestamp="2025-08-13 20:17:14 +0000 UTC" firstStartedPulling="2025-08-13 20:17:24.321737916 +0000 UTC m=+2011.014402594" lastFinishedPulling="2025-08-13 20:18:59.464001005 +0000 UTC m=+2106.156665683" observedRunningTime="2025-08-13 20:19:00.220231852 +0000 UTC m=+2106.912896660" watchObservedRunningTime="2025-08-13 20:19:00.221985722 +0000 UTC m=+2106.914651530"
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.883357 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:00 crc kubenswrapper[4183]: I0813 20:19:00.883456 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:01 crc kubenswrapper[4183]: I0813 20:19:01.993382 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:19:01 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:19:01 crc kubenswrapper[4183]: >
Aug 13 20:19:12 crc kubenswrapper[4183]: I0813 20:19:12.039276 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:19:12 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:19:12 crc kubenswrapper[4183]: >
Aug 13 20:19:21 crc kubenswrapper[4183]: I0813 20:19:21.985070 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:19:21 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:19:21 crc kubenswrapper[4183]: >
Aug 13 20:19:31 crc kubenswrapper[4183]: I0813 20:19:31.006405 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:31 crc kubenswrapper[4183]: I0813 20:19:31.122567 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.138114 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"]
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.138918 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-swl5s" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" containerID="cri-o://6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca" gracePeriod=2
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.397883 4183 generic.go:334] "Generic (PLEG): container finished" podID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerID="6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca" exitCode=0
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.397948 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"}
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.611367 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735233 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") "
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735402 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") "
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.735463 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") pod \"407a8505-ab64-42f9-aa53-a63f8e97c189\" (UID: \"407a8505-ab64-42f9-aa53-a63f8e97c189\") "
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.736719 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities" (OuterVolumeSpecName: "utilities") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.742886 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n" (OuterVolumeSpecName: "kube-api-access-48x8n") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "kube-api-access-48x8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.839950 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-48x8n\" (UniqueName: \"kubernetes.io/projected/407a8505-ab64-42f9-aa53-a63f8e97c189-kube-api-access-48x8n\") on node \"crc\" DevicePath \"\""
Aug 13 20:19:34 crc kubenswrapper[4183]: I0813 20:19:34.840044 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415040 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swl5s" event={"ID":"407a8505-ab64-42f9-aa53-a63f8e97c189","Type":"ContainerDied","Data":"011ddcc3b1f8c14a5a32c853b9c6c3e0b9cee09c368f2d8bc956c20b0cf4d5d5"}
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415089 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swl5s"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.415176 4183 scope.go:117] "RemoveContainer" containerID="6cccf520e993f65fe7f04eb2fcd6d00f74c6d2b2e0662a163738ba7ad2f433ca"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.479710 4183 scope.go:117] "RemoveContainer" containerID="064b3140f95afe7c02e4fbe1840b217c2cf8563c4df0d72177d57a941d039783"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.716961 4183 scope.go:117] "RemoveContainer" containerID="194af42a5001c99ae861a7524d09f26e2ac4df40b0aef4c0a94425791cba5661"
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.736163 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "407a8505-ab64-42f9-aa53-a63f8e97c189" (UID: "407a8505-ab64-42f9-aa53-a63f8e97c189"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:19:35 crc kubenswrapper[4183]: I0813 20:19:35.764101 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407a8505-ab64-42f9-aa53-a63f8e97c189-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:19:38 crc kubenswrapper[4183]: I0813 20:19:38.358735 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"]
Aug 13 20:19:38 crc kubenswrapper[4183]: I0813 20:19:38.604074 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-swl5s"]
Aug 13 20:19:39 crc kubenswrapper[4183]: I0813 20:19:39.217381 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" path="/var/lib/kubelet/pods/407a8505-ab64-42f9-aa53-a63f8e97c189/volumes"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.774766 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776105 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776210 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776267 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:19:54 crc kubenswrapper[4183]: I0813 20:19:54.776328 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.780947 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781628 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781725 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.781833 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:20:54 crc kubenswrapper[4183]: I0813 20:20:54.783726 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.784718 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785676 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785728 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.785858 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:21:54 crc kubenswrapper[4183]: I0813 20:21:54.786005 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.786811 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787500 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787549 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787580 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:22:54 crc kubenswrapper[4183]: I0813 20:22:54.787616 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.788392 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789243 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789302 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789353 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:23:54 crc kubenswrapper[4183]: I0813 20:23:54.789391 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.790268 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791164 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791235 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791272 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:24:54 crc kubenswrapper[4183]: I0813 20:24:54.791350 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.792447 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793238 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793278 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793314 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:25:54 crc kubenswrapper[4183]: I0813 20:25:54.793340 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.794075 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.794888 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795014 4183 kubelet_getters.go:187] "Pod status updated"
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795061 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:26:54 crc kubenswrapper[4183]: I0813 20:26:54.795093 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.681077 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.681897 4183 topology_manager.go:215] "Topology Admit Handler" podUID="b152b92f-8fab-4b74-8e68-00278380759d" podNamespace="openshift-marketplace" podName="redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684542 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684698 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684728 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684735 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684752 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684759 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" 
containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684841 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684867 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684880 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684887 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684898 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684908 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684918 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684925 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684937 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684944 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" 
containerName="extract-content" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684955 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684962 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.684975 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.684982 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="extract-utilities" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.685027 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685041 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: E0813 20:27:05.685052 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685059 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685448 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="407a8505-ab64-42f9-aa53-a63f8e97c189" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685487 4183 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="718f06fe-dcad-4053-8de2-e2c38fb7503d" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685502 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a084eaff-10e9-439e-96f3-f3450fb14db7" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.685512 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e241cc6-c71d-4fa0-9a1a-18098bcf6594" containerName="registry-server" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.686679 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.725355 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734441 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734624 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.734953 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: 
\"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838250 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.836613 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838404 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.838438 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.839029 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " 
pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.843107 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.843256 4183 topology_manager.go:215] "Topology Admit Handler" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" podNamespace="openshift-marketplace" podName="certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.847188 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.880146 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"redhat-marketplace-jbzn9\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") " pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.881068 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.941762 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.942067 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: 
\"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:05 crc kubenswrapper[4183]: I0813 20:27:05.942116 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.012530 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043376 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043470 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.043535 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.044525 4183 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.045458 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.083111 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"certified-operators-xldzg\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") " pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.172146 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.522088 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.627904 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.815655 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e"} Aug 13 20:27:06 crc kubenswrapper[4183]: I0813 20:27:06.817284 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0"} Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.828702 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc" exitCode=0 Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.828899 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"} Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.833166 4183 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.834515 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" 
containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331" exitCode=0 Aug 13 20:27:07 crc kubenswrapper[4183]: I0813 20:27:07.834677 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"} Aug 13 20:27:08 crc kubenswrapper[4183]: I0813 20:27:08.846077 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} Aug 13 20:27:08 crc kubenswrapper[4183]: I0813 20:27:08.849557 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} Aug 13 20:27:15 crc kubenswrapper[4183]: I0813 20:27:15.932398 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286" exitCode=0 Aug 13 20:27:15 crc kubenswrapper[4183]: I0813 20:27:15.932496 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} Aug 13 20:27:17 crc kubenswrapper[4183]: I0813 20:27:17.952187 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerStarted","Data":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"} Aug 13 20:27:18 crc kubenswrapper[4183]: 
I0813 20:27:18.623429 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbzn9" podStartSLOduration=5.083855925 podStartE2EDuration="13.623333743s" podCreationTimestamp="2025-08-13 20:27:05 +0000 UTC" firstStartedPulling="2025-08-13 20:27:07.836421672 +0000 UTC m=+2594.529086440" lastFinishedPulling="2025-08-13 20:27:16.37589966 +0000 UTC m=+2603.068564258" observedRunningTime="2025-08-13 20:27:18.616155369 +0000 UTC m=+2605.308820377" watchObservedRunningTime="2025-08-13 20:27:18.623333743 +0000 UTC m=+2605.315998621" Aug 13 20:27:18 crc kubenswrapper[4183]: I0813 20:27:18.966283 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5" exitCode=0 Aug 13 20:27:18 crc kubenswrapper[4183]: I0813 20:27:18.966964 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} Aug 13 20:27:19 crc kubenswrapper[4183]: I0813 20:27:19.985472 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerStarted","Data":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"} Aug 13 20:27:20 crc kubenswrapper[4183]: I0813 20:27:20.034729 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xldzg" podStartSLOduration=3.500986964 podStartE2EDuration="15.034677739s" podCreationTimestamp="2025-08-13 20:27:05 +0000 UTC" firstStartedPulling="2025-08-13 20:27:07.832168011 +0000 UTC m=+2594.524832719" lastFinishedPulling="2025-08-13 20:27:19.365858876 +0000 UTC m=+2606.058523494" observedRunningTime="2025-08-13 
20:27:20.028528893 +0000 UTC m=+2606.721193801" watchObservedRunningTime="2025-08-13 20:27:20.034677739 +0000 UTC m=+2606.727342477" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.013496 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.015469 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.171177 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.173954 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.174409 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:26 crc kubenswrapper[4183]: I0813 20:27:26.312207 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.173669 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.174635 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbzn9" Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.267615 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"] Aug 13 20:27:27 crc kubenswrapper[4183]: I0813 20:27:27.431673 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"] Aug 
13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.069858 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jbzn9" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server" containerID="cri-o://7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" gracePeriod=2 Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.070204 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xldzg" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server" containerID="cri-o://88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" gracePeriod=2 Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.551734 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg" Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.565636 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9"
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706074 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706587 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.706991 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707191 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") pod \"926ac7a4-e156-4e71-9681-7a48897402eb\" (UID: \"926ac7a4-e156-4e71-9681-7a48897402eb\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707319 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707465 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") pod \"b152b92f-8fab-4b74-8e68-00278380759d\" (UID: \"b152b92f-8fab-4b74-8e68-00278380759d\") "
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707537 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities" (OuterVolumeSpecName: "utilities") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.707757 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities" (OuterVolumeSpecName: "utilities") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.708134 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.708253 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.714867 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g" (OuterVolumeSpecName: "kube-api-access-tcz8g") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "kube-api-access-tcz8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.715290 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6" (OuterVolumeSpecName: "kube-api-access-sfrr6") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "kube-api-access-sfrr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.810096 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sfrr6\" (UniqueName: \"kubernetes.io/projected/b152b92f-8fab-4b74-8e68-00278380759d-kube-api-access-sfrr6\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.810149 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tcz8g\" (UniqueName: \"kubernetes.io/projected/926ac7a4-e156-4e71-9681-7a48897402eb-kube-api-access-tcz8g\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.846204 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b152b92f-8fab-4b74-8e68-00278380759d" (UID: "b152b92f-8fab-4b74-8e68-00278380759d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.911927 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b152b92f-8fab-4b74-8e68-00278380759d-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:29 crc kubenswrapper[4183]: I0813 20:27:29.944382 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "926ac7a4-e156-4e71-9681-7a48897402eb" (UID: "926ac7a4-e156-4e71-9681-7a48897402eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.013927 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926ac7a4-e156-4e71-9681-7a48897402eb-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078332 4183 generic.go:334] "Generic (PLEG): container finished" podID="b152b92f-8fab-4b74-8e68-00278380759d" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032" exitCode=0
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078431 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078464 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbzn9" event={"ID":"b152b92f-8fab-4b74-8e68-00278380759d","Type":"ContainerDied","Data":"65efa81c3e0e120daecf6c9164d2abac6df51a4e5e31a257f7b78c4d3d3d38c0"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078506 4183 scope.go:117] "RemoveContainer" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.078669 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbzn9"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087593 4183 generic.go:334] "Generic (PLEG): container finished" podID="926ac7a4-e156-4e71-9681-7a48897402eb" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418" exitCode=0
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087681 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.087736 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xldzg" event={"ID":"926ac7a4-e156-4e71-9681-7a48897402eb","Type":"ContainerDied","Data":"d26f242e575b9e444a733da3b77f8e6c54682a63650671af06353e001140925e"}
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.089151 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xldzg"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.155105 4183 scope.go:117] "RemoveContainer" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.230393 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.247602 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xldzg"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.259374 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.266146 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbzn9"]
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.278132 4183 scope.go:117] "RemoveContainer" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.324392 4183 scope.go:117] "RemoveContainer" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.326065 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": container with ID starting with 7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032 not found: ID does not exist" containerID="7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.326155 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032"} err="failed to get container status \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": rpc error: code = NotFound desc = could not find container \"7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032\": container with ID starting with 7e30ccf539b0e939f52dfb902c47e4cd395445da1765661c6d426b8ca964b032 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.326184 4183 scope.go:117] "RemoveContainer" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.327105 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": container with ID starting with ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286 not found: ID does not exist" containerID="ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.327149 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286"} err="failed to get container status \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": rpc error: code = NotFound desc = could not find container \"ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286\": container with ID starting with ed043c58aa1311cba339ea3a88a4451724c3ae23ee6961db5bf5da456cab8286 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.327166 4183 scope.go:117] "RemoveContainer" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.327955 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": container with ID starting with 2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331 not found: ID does not exist" containerID="2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.328062 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331"} err="failed to get container status \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": rpc error: code = NotFound desc = could not find container \"2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331\": container with ID starting with 2ce6380617c75b8aec8cca4873e4bbb6b91a72f626c193c7888f39c7509cf331 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.328084 4183 scope.go:117] "RemoveContainer" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.363618 4183 scope.go:117] "RemoveContainer" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.424486 4183 scope.go:117] "RemoveContainer" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.478357 4183 scope.go:117] "RemoveContainer" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.479580 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": container with ID starting with 88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418 not found: ID does not exist" containerID="88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.479858 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418"} err="failed to get container status \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": rpc error: code = NotFound desc = could not find container \"88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418\": container with ID starting with 88530f2c8d6983ea4b7f8a55a61c8904c48794b4f2d766641e0746619745d418 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.479883 4183 scope.go:117] "RemoveContainer" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.480605 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": container with ID starting with b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5 not found: ID does not exist" containerID="b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.480680 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5"} err="failed to get container status \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": rpc error: code = NotFound desc = could not find container \"b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5\": container with ID starting with b9f7f9231b80223fe3208938c84d0607b808bcfbd6509dd456db0623e8be59a5 not found: ID does not exist"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.480697 4183 scope.go:117] "RemoveContainer" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"
Aug 13 20:27:30 crc kubenswrapper[4183]: E0813 20:27:30.481149 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": container with ID starting with de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc not found: ID does not exist" containerID="de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"
Aug 13 20:27:30 crc kubenswrapper[4183]: I0813 20:27:30.481210 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc"} err="failed to get container status \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": rpc error: code = NotFound desc = could not find container \"de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc\": container with ID starting with de56dabaa69b74ae1b421430568b061a335456078e93d11abdc0f8c2b32ea7bc not found: ID does not exist"
Aug 13 20:27:31 crc kubenswrapper[4183]: I0813 20:27:31.218427 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" path="/var/lib/kubelet/pods/926ac7a4-e156-4e71-9681-7a48897402eb/volumes"
Aug 13 20:27:31 crc kubenswrapper[4183]: I0813 20:27:31.219874 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b152b92f-8fab-4b74-8e68-00278380759d" path="/var/lib/kubelet/pods/b152b92f-8fab-4b74-8e68-00278380759d/volumes"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.796855 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797488 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797527 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797558 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:27:54 crc kubenswrapper[4183]: I0813 20:27:54.797597 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.324677 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325567 4183 topology_manager.go:215] "Topology Admit Handler" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" podNamespace="openshift-marketplace" podName="community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325926 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325946 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325959 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325966 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.325982 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.325989 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-content"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326029 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326047 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326063 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326072 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="extract-utilities"
Aug 13 20:28:43 crc kubenswrapper[4183]: E0813 20:28:43.326125 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326136 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326308 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="926ac7a4-e156-4e71-9681-7a48897402eb" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.326322 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="b152b92f-8fab-4b74-8e68-00278380759d" containerName="registry-server"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.327661 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.360377 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.377401 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.377601 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.378243 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479200 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479349 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.479405 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.480311 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.480353 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.516418 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"community-operators-hvwvm\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") " pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:43 crc kubenswrapper[4183]: I0813 20:28:43.659547 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.064674 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.629205 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef" exitCode=0
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.630049 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"}
Aug 13 20:28:44 crc kubenswrapper[4183]: I0813 20:28:44.630922 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af"}
Aug 13 20:28:45 crc kubenswrapper[4183]: I0813 20:28:45.657598 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"}
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.798527 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799512 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799589 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799642 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:28:54 crc kubenswrapper[4183]: I0813 20:28:54.799690 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:28:57 crc kubenswrapper[4183]: I0813 20:28:57.754900 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"}
Aug 13 20:28:57 crc kubenswrapper[4183]: I0813 20:28:57.754912 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519" exitCode=0
Aug 13 20:28:59 crc kubenswrapper[4183]: I0813 20:28:59.779256 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerStarted","Data":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"}
Aug 13 20:28:59 crc kubenswrapper[4183]: I0813 20:28:59.823743 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvwvm" podStartSLOduration=3.175837032 podStartE2EDuration="16.823670146s" podCreationTimestamp="2025-08-13 20:28:43 +0000 UTC" firstStartedPulling="2025-08-13 20:28:44.639101497 +0000 UTC m=+2691.331766095" lastFinishedPulling="2025-08-13 20:28:58.286934521 +0000 UTC m=+2704.979599209" observedRunningTime="2025-08-13 20:28:59.820758222 +0000 UTC m=+2706.513422960" watchObservedRunningTime="2025-08-13 20:28:59.823670146 +0000 UTC m=+2706.516334874"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.660115 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.660963 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.780392 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.914752 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:03 crc kubenswrapper[4183]: I0813 20:29:03.990443 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:29:05 crc kubenswrapper[4183]: I0813 20:29:05.815902 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvwvm" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server" containerID="cri-o://133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" gracePeriod=2
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.270104 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.449566 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") "
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.450180 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") "
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.450371 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") pod \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\" (UID: \"bfb8fd54-a923-43fe-a0f5-bc4066352d71\") "
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.451196 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities" (OuterVolumeSpecName: "utilities") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.457914 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz" (OuterVolumeSpecName: "kube-api-access-j4wdz") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "kube-api-access-j4wdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.551885 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.551946 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j4wdz\" (UniqueName: \"kubernetes.io/projected/bfb8fd54-a923-43fe-a0f5-bc4066352d71-kube-api-access-j4wdz\") on node \"crc\" DevicePath \"\""
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831648 4183 generic.go:334] "Generic (PLEG): container finished" podID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc" exitCode=0
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831920 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"}
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.831997 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvwvm" event={"ID":"bfb8fd54-a923-43fe-a0f5-bc4066352d71","Type":"ContainerDied","Data":"786926dc94686efd1a36edcba9d74a25c52ebbab0b0f4bffa09ccd0563fa89af"}
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.832103 4183 scope.go:117] "RemoveContainer" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.832179 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvwvm"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.886425 4183 scope.go:117] "RemoveContainer" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"
Aug 13 20:29:06 crc kubenswrapper[4183]: I0813 20:29:06.958360 4183 scope.go:117] "RemoveContainer" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.001299 4183 scope.go:117] "RemoveContainer" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"
Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.002724 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": container with ID starting with 133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc not found: ID does not exist" containerID="133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.002860 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc"} err="failed to get container status \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": rpc error: code = NotFound desc = could not find container \"133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc\": container with ID starting with 133bc35819b92fc5eccabda1a227691250d617d9190cf935e0388ffd98cee7fc not found: ID does not exist"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.002883 4183 scope.go:117] "RemoveContainer" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"
Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.003455 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": container with ID starting with e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519 not found: ID does not exist" containerID="e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.003521 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519"} err="failed to get container status \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": rpc error: code = NotFound desc = could not find container \"e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519\": container with ID starting with e680e963590fc9f5f15495fee59202e5d2c3d62df223d53f279ca67bdf1c2519 not found: ID does not exist"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.003548 4183 scope.go:117] "RemoveContainer" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"
Aug 13 20:29:07 crc kubenswrapper[4183]: E0813 20:29:07.004426 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": container with ID starting with e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef not found: ID does not exist" containerID="e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.004459 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef"} err="failed to get container status \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": rpc error: code = NotFound desc = could not find container \"e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef\": container with ID starting with e757bc97b0adc6d6cf0ccef8319788efea8208fee6dfe24ef865cc769848b1ef not found: ID does not exist"
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.133046 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfb8fd54-a923-43fe-a0f5-bc4066352d71" (UID: "bfb8fd54-a923-43fe-a0f5-bc4066352d71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.159478 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb8fd54-a923-43fe-a0f5-bc4066352d71-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.474406 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:29:07 crc kubenswrapper[4183]: I0813 20:29:07.488264 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvwvm"]
Aug 13 20:29:09 crc kubenswrapper[4183]: I0813 20:29:09.217193 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" path="/var/lib/kubelet/pods/bfb8fd54-a923-43fe-a0f5-bc4066352d71/volumes"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.105720 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106596 4183 topology_manager.go:215] "Topology Admit Handler" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" podNamespace="openshift-marketplace" podName="redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106870 4183 cpu_manager.go:396]
"RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-utilities"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106886 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-utilities"
Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106898 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-content"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106906 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="extract-content"
Aug 13 20:29:30 crc kubenswrapper[4183]: E0813 20:29:30.106923 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.106932 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.107125 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb8fd54-a923-43fe-a0f5-bc4066352d71" containerName="registry-server"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.115316 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.142749 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293194 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293265 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.293294 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.394671 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.395277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.395684 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.396060 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.396737 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.439308 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"redhat-operators-zdwjn\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") " pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.443745 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:29:30 crc kubenswrapper[4183]: I0813 20:29:30.797719 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:29:31 crc kubenswrapper[4183]: I0813 20:29:31.010510 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664"}
Aug 13 20:29:32 crc kubenswrapper[4183]: I0813 20:29:32.020856 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa" exitCode=0
Aug 13 20:29:32 crc kubenswrapper[4183]: I0813 20:29:32.021000 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"}
Aug 13 20:29:33 crc kubenswrapper[4183]: I0813 20:29:33.030834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"}
Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.801138 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802303 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802388 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802449 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Aug 13 20:29:54 crc kubenswrapper[4183]: I0813 20:29:54.802499 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.984271 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"]
Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.985070 4183 topology_manager.go:215] "Topology Admit Handler" podUID="ad171c4b-8408-4370-8e86-502999788ddb" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251950-x8jjd"
Aug 13 20:30:01 crc kubenswrapper[4183]: I0813 20:30:01.985900 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.008184 4183 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.008444 4183 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.036942 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"]
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.076386 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.076843 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.077488 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179277 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179382 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.179452 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.180707 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.190825 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.218103 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"collect-profiles-29251950-x8jjd\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.322129 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:02 crc kubenswrapper[4183]: I0813 20:30:02.812554 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"]
Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.273725 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerStarted","Data":"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89"}
Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.273834 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerStarted","Data":"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9"}
Aug 13 20:30:03 crc kubenswrapper[4183]: I0813 20:30:03.327749 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" podStartSLOduration=2.327674238 podStartE2EDuration="2.327674238s" podCreationTimestamp="2025-08-13 20:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 20:30:03.323089886 +0000 UTC m=+2770.015754874" watchObservedRunningTime="2025-08-13 20:30:03.327674238 +0000 UTC m=+2770.020338866"
Aug 13 20:30:05 crc kubenswrapper[4183]: I0813 20:30:05.290513 4183 generic.go:334] "Generic (PLEG): container finished" podID="ad171c4b-8408-4370-8e86-502999788ddb" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" exitCode=0
Aug 13 20:30:05 crc kubenswrapper[4183]: I0813 20:30:05.290622 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod"
pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerDied","Data":"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89"}
Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.889910 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.968429 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") "
Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.969155 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") "
Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.969974 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") pod \"ad171c4b-8408-4370-8e86-502999788ddb\" (UID: \"ad171c4b-8408-4370-8e86-502999788ddb\") "
Aug 13 20:30:06 crc kubenswrapper[4183]: I0813 20:30:06.972559 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.000000 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.001682 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw" (OuterVolumeSpecName: "kube-api-access-pmlcw") pod "ad171c4b-8408-4370-8e86-502999788ddb" (UID: "ad171c4b-8408-4370-8e86-502999788ddb"). InnerVolumeSpecName "kube-api-access-pmlcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073397 4183 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad171c4b-8408-4370-8e86-502999788ddb-secret-volume\") on node \"crc\" DevicePath \"\""
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073542 4183 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad171c4b-8408-4370-8e86-502999788ddb-config-volume\") on node \"crc\" DevicePath \"\""
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.073637 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pmlcw\" (UniqueName: \"kubernetes.io/projected/ad171c4b-8408-4370-8e86-502999788ddb-kube-api-access-pmlcw\") on node \"crc\" DevicePath \"\""
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.307944 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.308046 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" event={"ID":"ad171c4b-8408-4370-8e86-502999788ddb","Type":"ContainerDied","Data":"61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9"}
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.309566 4183 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f39a784f23d0eb34c08ee8791af999ae86d8f1a778312f8732ee7ffb6e1ab9"
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.313402 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29" exitCode=0
Aug 13 20:30:07 crc kubenswrapper[4183]: I0813 20:30:07.314010 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"}
Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.188369 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"]
Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.202397 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251905-zmjv9"]
Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.323625 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerStarted","Data":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"}
Aug 13 20:30:08 crc kubenswrapper[4183]: I0813 20:30:08.376959 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zdwjn" podStartSLOduration=2.644749574 podStartE2EDuration="38.376906603s" podCreationTimestamp="2025-08-13 20:29:30 +0000 UTC" firstStartedPulling="2025-08-13 20:29:32.023072954 +0000 UTC m=+2738.715737712" lastFinishedPulling="2025-08-13 20:30:07.755230113 +0000 UTC m=+2774.447894741" observedRunningTime="2025-08-13 20:30:08.369449078 +0000 UTC m=+2775.062113856" watchObservedRunningTime="2025-08-13 20:30:08.376906603 +0000 UTC m=+2775.069571331"
Aug 13 20:30:09 crc kubenswrapper[4183]: I0813 20:30:09.217942 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8500d7bd-50fb-4ca6-af41-b7a24cae43cd" path="/var/lib/kubelet/pods/8500d7bd-50fb-4ca6-af41-b7a24cae43cd/volumes"
Aug 13 20:30:10 crc kubenswrapper[4183]: I0813 20:30:10.444935 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:30:10 crc kubenswrapper[4183]: I0813 20:30:10.445312 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:30:11 crc kubenswrapper[4183]: I0813 20:30:11.559391 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:30:11 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:30:11 crc kubenswrapper[4183]: >
Aug 13 20:30:21 crc kubenswrapper[4183]: I0813 20:30:21.571657 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" probeResult="failure" output=<
Aug 13 20:30:21 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s
Aug 13 20:30:21 crc kubenswrapper[4183]: >
Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.639012 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.789286 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:30:30 crc kubenswrapper[4183]: I0813 20:30:30.862664 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.506496 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zdwjn" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" containerID="cri-o://7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" gracePeriod=2
Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.931506 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984564 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") "
Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984743 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") "
Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.984919 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") pod \"6d579e1a-3b27-4c1f-9175-42ac58490d42\" (UID: \"6d579e1a-3b27-4c1f-9175-42ac58490d42\") "
Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.987281 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities" (OuterVolumeSpecName: "utilities") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:30:32 crc kubenswrapper[4183]: I0813 20:30:32.995193 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8" (OuterVolumeSpecName: "kube-api-access-r6rj8") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "kube-api-access-r6rj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.086897 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-utilities\") on node \"crc\" DevicePath \"\""
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.087266 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r6rj8\" (UniqueName: \"kubernetes.io/projected/6d579e1a-3b27-4c1f-9175-42ac58490d42-kube-api-access-r6rj8\") on node \"crc\" DevicePath \"\""
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521199 4183 generic.go:334] "Generic (PLEG): container finished" podID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e" exitCode=0
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521250 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"}
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521283 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdwjn" event={"ID":"6d579e1a-3b27-4c1f-9175-42ac58490d42","Type":"ContainerDied","Data":"3fdb2c96a67c0023e81d4e6bc3c617fe7dc7a69ecde6952807c647f2fadab664"}
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521312 4183 scope.go:117] "RemoveContainer" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.521409 4183 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdwjn"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.589471 4183 scope.go:117] "RemoveContainer" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.818192 4183 scope.go:117] "RemoveContainer" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.892207 4183 scope.go:117] "RemoveContainer" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"
Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.897265 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": container with ID starting with 7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e not found: ID does not exist" containerID="7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.897391 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e"} err="failed to get container status \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": rpc error: code = NotFound desc = could not find container \"7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e\": container with ID starting with 7883102f1a9e3d1e5b1b2906ef9833009223f4efc5cfe9d327a5f7340ebd983e not found: ID does not exist"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.897418 4183 scope.go:117] "RemoveContainer" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"
Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.898541 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": container with ID starting with dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29 not found: ID does not exist" containerID="dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.898707 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29"} err="failed to get container status \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": rpc error: code = NotFound desc = could not find container \"dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29\": container with ID starting with dd08aaf9d3c514accc3008f9ff4a36a680f73168eda1c4184a8cfeed0f324d29 not found: ID does not exist"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.898943 4183 scope.go:117] "RemoveContainer" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"
Aug 13 20:30:33 crc kubenswrapper[4183]: E0813 20:30:33.899705 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": container with ID starting with a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa not found: ID does not exist" containerID="a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.899762 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa"} err="failed to get container status \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": rpc error: code = NotFound desc = could not find container \"a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa\": container with ID starting with a54b9d1110572d22b3a369ea31bffa9fe51cea3f5e0f5eec8bf96489870607fa not found: ID does not exist"
Aug 13 20:30:33 crc kubenswrapper[4183]: I0813 20:30:33.930635 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d579e1a-3b27-4c1f-9175-42ac58490d42" (UID: "6d579e1a-3b27-4c1f-9175-42ac58490d42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.008519 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d579e1a-3b27-4c1f-9175-42ac58490d42-catalog-content\") on node \"crc\" DevicePath \"\""
Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.175424 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:30:34 crc kubenswrapper[4183]: I0813 20:30:34.188387 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zdwjn"]
Aug 13 20:30:35 crc kubenswrapper[4183]: I0813 20:30:35.217865 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" path="/var/lib/kubelet/pods/6d579e1a-3b27-4c1f-9175-42ac58490d42/volumes"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.803495 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804074 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804179 4183 kubelet_getters.go:187] "Pod status updated"
pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804222 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:30:54 crc kubenswrapper[4183]: I0813 20:30:54.804256 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:30:56 crc kubenswrapper[4183]: I0813 20:30:56.527744 4183 scope.go:117] "RemoveContainer" containerID="a00abbf09803bc3f3a22a86887914ba2fa3026aff021086131cdf33906d7fb2c" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.805259 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806196 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806303 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806341 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:31:54 crc kubenswrapper[4183]: I0813 20:31:54.806378 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.807668 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808421 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 
20:32:54.808465 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808514 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:32:54 crc kubenswrapper[4183]: I0813 20:32:54.808615 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.809699 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810371 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810430 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810472 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:33:54 crc kubenswrapper[4183]: I0813 20:33:54.810521 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.810974 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.811990 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812054 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812164 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:34:54 crc kubenswrapper[4183]: I0813 20:34:54.812235 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.813302 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.813971 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814025 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814174 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:35:54 crc kubenswrapper[4183]: I0813 20:35:54.814227 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.815418 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816161 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816230 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816266 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:36:54 crc kubenswrapper[4183]: I0813 20:36:54.816304 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.226038 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227116 4183 topology_manager.go:215] "Topology Admit Handler" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" podNamespace="openshift-marketplace" podName="redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227465 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227489 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227519 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-utilities" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227529 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-utilities" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227576 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-content" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227589 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="extract-content" Aug 13 20:37:48 crc kubenswrapper[4183]: E0813 20:37:48.227600 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ad171c4b-8408-4370-8e86-502999788ddb" 
containerName="collect-profiles" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.227610 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.231919 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d579e1a-3b27-4c1f-9175-42ac58490d42" containerName="registry-server" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.231972 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad171c4b-8408-4370-8e86-502999788ddb" containerName="collect-profiles" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.233395 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.272736 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360000 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360188 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.360524 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.462502 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.463115 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.463353 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.464352 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.464448 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.493262 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"redhat-marketplace-nkzlk\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.563669 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:48 crc kubenswrapper[4183]: I0813 20:37:48.897981 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610098 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d" exitCode=0 Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610182 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"} Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.610530 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973"} Aug 13 20:37:49 crc kubenswrapper[4183]: I0813 20:37:49.614029 4183 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Aug 13 20:37:50 crc kubenswrapper[4183]: I0813 20:37:50.621086 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"} Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.659569 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee" exitCode=0 Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.660074 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"} Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.816871 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.816963 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817010 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817053 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:37:54 crc kubenswrapper[4183]: I0813 20:37:54.817088 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:37:55 crc kubenswrapper[4183]: I0813 20:37:55.670764 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerStarted","Data":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"} Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.565755 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.566326 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.676409 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:37:58 crc kubenswrapper[4183]: I0813 20:37:58.705440 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nkzlk" podStartSLOduration=5.288354689 podStartE2EDuration="10.705385893s" podCreationTimestamp="2025-08-13 20:37:48 +0000 UTC" firstStartedPulling="2025-08-13 20:37:49.613412649 +0000 UTC m=+3236.306077307" lastFinishedPulling="2025-08-13 20:37:55.030443883 +0000 UTC m=+3241.723108511" observedRunningTime="2025-08-13 20:37:56.514890851 +0000 UTC m=+3243.207556409" watchObservedRunningTime="2025-08-13 20:37:58.705385893 +0000 UTC m=+3245.398050771" Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.683194 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.749777 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:38:08 crc kubenswrapper[4183]: I0813 20:38:08.764345 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nkzlk" 
podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server" containerID="cri-o://8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" gracePeriod=2 Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.176666 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.217983 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.218293 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.218355 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") pod \"afc02c17-9714-426d-aafa-ee58c673ab0c\" (UID: \"afc02c17-9714-426d-aafa-ee58c673ab0c\") " Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.219426 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities" (OuterVolumeSpecName: "utilities") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.226278 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9" (OuterVolumeSpecName: "kube-api-access-9gcn9") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "kube-api-access-9gcn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.320361 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.320929 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9gcn9\" (UniqueName: \"kubernetes.io/projected/afc02c17-9714-426d-aafa-ee58c673ab0c-kube-api-access-9gcn9\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.366919 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afc02c17-9714-426d-aafa-ee58c673ab0c" (UID: "afc02c17-9714-426d-aafa-ee58c673ab0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.422616 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc02c17-9714-426d-aafa-ee58c673ab0c-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776026 4183 generic.go:334] "Generic (PLEG): container finished" podID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" exitCode=0 Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776115 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkzlk" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.776194 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"} Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.777248 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkzlk" event={"ID":"afc02c17-9714-426d-aafa-ee58c673ab0c","Type":"ContainerDied","Data":"316cb50fa85ce6160eae66b0e77413969935d818294ab5165bd912abd5fc6973"} Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.777285 4183 scope.go:117] "RemoveContainer" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.829179 4183 scope.go:117] "RemoveContainer" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.866063 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 
20:38:09.875230 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkzlk"] Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.883982 4183 scope.go:117] "RemoveContainer" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.922409 4183 scope.go:117] "RemoveContainer" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.923230 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": container with ID starting with 8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922 not found: ID does not exist" containerID="8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923304 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922"} err="failed to get container status \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": rpc error: code = NotFound desc = could not find container \"8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922\": container with ID starting with 8c0ce2e26a36b42bbbf4f6b8b7a9e7a3db2be497f2cd4408c8bf334f82611922 not found: ID does not exist" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923319 4183 scope.go:117] "RemoveContainer" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee" Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.923941 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": container with ID starting with 1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee not found: ID does not exist" containerID="1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923970 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee"} err="failed to get container status \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": rpc error: code = NotFound desc = could not find container \"1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee\": container with ID starting with 1f10cb491a363d12591b266e087b0fcbb708d3c04b98458a2baaa6c8740d55ee not found: ID does not exist" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.923981 4183 scope.go:117] "RemoveContainer" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d" Aug 13 20:38:09 crc kubenswrapper[4183]: E0813 20:38:09.925057 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": container with ID starting with 380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d not found: ID does not exist" containerID="380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d" Aug 13 20:38:09 crc kubenswrapper[4183]: I0813 20:38:09.925250 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d"} err="failed to get container status \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": rpc error: code = NotFound desc = could not find container \"380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d\": 
container with ID starting with 380cb4808274ab30e2897e56a320084500d526076fc23555aa51c36d1995e57d not found: ID does not exist" Aug 13 20:38:11 crc kubenswrapper[4183]: I0813 20:38:11.217764 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" path="/var/lib/kubelet/pods/afc02c17-9714-426d-aafa-ee58c673ab0c/volumes" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.093544 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.096217 4183 topology_manager.go:215] "Topology Admit Handler" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" podNamespace="openshift-marketplace" podName="certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.096659 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.096835 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server" Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.104025 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-utilities" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104087 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-utilities" Aug 13 20:38:36 crc kubenswrapper[4183]: E0813 20:38:36.104122 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-content" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.104129 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="extract-content" Aug 13 20:38:36 crc 
kubenswrapper[4183]: I0813 20:38:36.104443 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc02c17-9714-426d-aafa-ee58c673ab0c" containerName="registry-server" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.105518 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.143532 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.203570 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.203656 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.204094 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.305098 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.305560 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.306221 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.307045 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.307051 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.340674 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqlp7\" (UniqueName: 
\"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"certified-operators-4kmbv\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.431705 4183 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.809750 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:36 crc kubenswrapper[4183]: I0813 20:38:36.985997 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0"} Aug 13 20:38:37 crc kubenswrapper[4183]: I0813 20:38:37.994454 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" exitCode=0 Aug 13 20:38:37 crc kubenswrapper[4183]: I0813 20:38:37.994525 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"} Aug 13 20:38:39 crc kubenswrapper[4183]: I0813 20:38:39.004230 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"} Aug 13 20:38:44 crc kubenswrapper[4183]: I0813 20:38:44.041088 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" 
containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" exitCode=0 Aug 13 20:38:44 crc kubenswrapper[4183]: I0813 20:38:44.041438 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"} Aug 13 20:38:45 crc kubenswrapper[4183]: I0813 20:38:45.050620 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerStarted","Data":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"} Aug 13 20:38:45 crc kubenswrapper[4183]: I0813 20:38:45.084743 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4kmbv" podStartSLOduration=2.689452066 podStartE2EDuration="9.084667311s" podCreationTimestamp="2025-08-13 20:38:36 +0000 UTC" firstStartedPulling="2025-08-13 20:38:37.996633082 +0000 UTC m=+3284.689297820" lastFinishedPulling="2025-08-13 20:38:44.391848357 +0000 UTC m=+3291.084513065" observedRunningTime="2025-08-13 20:38:45.080307175 +0000 UTC m=+3291.772971963" watchObservedRunningTime="2025-08-13 20:38:45.084667311 +0000 UTC m=+3291.777332029" Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.432635 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.433566 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:46 crc kubenswrapper[4183]: I0813 20:38:46.551433 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 
20:38:54.817852 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818310 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818423 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818467 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:38:54 crc kubenswrapper[4183]: I0813 20:38:54.818514 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:38:56 crc kubenswrapper[4183]: I0813 20:38:56.564125 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:56 crc kubenswrapper[4183]: I0813 20:38:56.644422 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.141812 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4kmbv" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" containerID="cri-o://4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" gracePeriod=2 Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.533422 4183 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617319 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617553 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.617652 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") pod \"847e60dc-7a0a-4115-a7e1-356476e319e7\" (UID: \"847e60dc-7a0a-4115-a7e1-356476e319e7\") " Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.618960 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities" (OuterVolumeSpecName: "utilities") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.628370 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7" (OuterVolumeSpecName: "kube-api-access-bqlp7") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "kube-api-access-bqlp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.719139 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.719228 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqlp7\" (UniqueName: \"kubernetes.io/projected/847e60dc-7a0a-4115-a7e1-356476e319e7-kube-api-access-bqlp7\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.842955 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "847e60dc-7a0a-4115-a7e1-356476e319e7" (UID: "847e60dc-7a0a-4115-a7e1-356476e319e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:38:57 crc kubenswrapper[4183]: I0813 20:38:57.921914 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/847e60dc-7a0a-4115-a7e1-356476e319e7-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151335 4183 generic.go:334] "Generic (PLEG): container finished" podID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" exitCode=0 Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151405 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"} Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151452 4183 kubelet.go:2461] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-4kmbv" event={"ID":"847e60dc-7a0a-4115-a7e1-356476e319e7","Type":"ContainerDied","Data":"48a72e1ed96b8c0e5bbe9b3b5aff8ae2f439297ae80339ffcbf1bb7ef84d8de0"} Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151497 4183 scope.go:117] "RemoveContainer" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.151628 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kmbv" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.199060 4183 scope.go:117] "RemoveContainer" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.240373 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.246222 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4kmbv"] Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.267919 4183 scope.go:117] "RemoveContainer" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.320226 4183 scope.go:117] "RemoveContainer" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" Aug 13 20:38:58 crc kubenswrapper[4183]: E0813 20:38:58.321862 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": container with ID starting with 4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b not found: ID does not exist" containerID="4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 
20:38:58.321944 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b"} err="failed to get container status \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": rpc error: code = NotFound desc = could not find container \"4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b\": container with ID starting with 4d4fa968ffeb0d6b6d897b7980c16b8302c2093e98fc6200cbfdce0392867e0b not found: ID does not exist" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.321968 4183 scope.go:117] "RemoveContainer" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" Aug 13 20:38:58 crc kubenswrapper[4183]: E0813 20:38:58.322957 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": container with ID starting with cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2 not found: ID does not exist" containerID="cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323051 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2"} err="failed to get container status \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": rpc error: code = NotFound desc = could not find container \"cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2\": container with ID starting with cca9e40ae74d8be31d8667f9679183397993730648da379af8845ec53dbc84b2 not found: ID does not exist" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323071 4183 scope.go:117] "RemoveContainer" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" Aug 13 20:38:58 crc 
kubenswrapper[4183]: E0813 20:38:58.323851 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": container with ID starting with f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255 not found: ID does not exist" containerID="f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255" Aug 13 20:38:58 crc kubenswrapper[4183]: I0813 20:38:58.323918 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255"} err="failed to get container status \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": rpc error: code = NotFound desc = could not find container \"f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255\": container with ID starting with f13decb9fdd30ef896ae57a0bb1e7c727d2f51bf23d21a0c06925e526cda0255 not found: ID does not exist" Aug 13 20:38:59 crc kubenswrapper[4183]: I0813 20:38:59.221999 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" path="/var/lib/kubelet/pods/847e60dc-7a0a-4115-a7e1-356476e319e7/volumes" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.819395 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820101 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820237 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820279 4183 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:39:54 crc kubenswrapper[4183]: I0813 20:39:54.820312 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821089 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821872 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821940 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.821984 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:40:54 crc kubenswrapper[4183]: I0813 20:40:54.822014 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.457733 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458497 4183 topology_manager.go:215] "Topology Admit Handler" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" podNamespace="openshift-marketplace" podName="redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458870 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458891 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" Aug 13 
20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458911 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-content" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458919 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-content" Aug 13 20:41:21 crc kubenswrapper[4183]: E0813 20:41:21.458935 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-utilities" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.458943 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="extract-utilities" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.459099 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="847e60dc-7a0a-4115-a7e1-356476e319e7" containerName="registry-server" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.463161 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.560744 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638564 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638643 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.638712 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740072 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740153 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.740263 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.741100 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.741155 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.775996 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"redhat-operators-k2tgr\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:21 crc kubenswrapper[4183]: I0813 20:41:21.813097 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:22 crc kubenswrapper[4183]: I0813 20:41:22.212454 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.138668 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"} Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.140092 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54" exitCode=0 Aug 13 20:41:23 crc kubenswrapper[4183]: I0813 20:41:23.140278 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c"} Aug 13 20:41:24 crc kubenswrapper[4183]: I0813 20:41:24.153949 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"} Aug 13 20:41:48 crc kubenswrapper[4183]: I0813 20:41:48.416680 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42" exitCode=0 Aug 13 20:41:48 crc kubenswrapper[4183]: I0813 20:41:48.417522 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" 
event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"} Aug 13 20:41:50 crc kubenswrapper[4183]: I0813 20:41:50.435617 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerStarted","Data":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"} Aug 13 20:41:51 crc kubenswrapper[4183]: I0813 20:41:51.814499 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:51 crc kubenswrapper[4183]: I0813 20:41:51.814605 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:41:52 crc kubenswrapper[4183]: I0813 20:41:52.942710 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" probeResult="failure" output=< Aug 13 20:41:52 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:41:52 crc kubenswrapper[4183]: > Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.822617 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823133 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823185 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Aug 13 20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823259 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Aug 13 
20:41:54 crc kubenswrapper[4183]: I0813 20:41:54.823299 4183 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Aug 13 20:42:02 crc kubenswrapper[4183]: I0813 20:42:02.939416 4183 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" probeResult="failure" output=< Aug 13 20:42:02 crc kubenswrapper[4183]: timeout: failed to connect service ":50051" within 1s Aug 13 20:42:02 crc kubenswrapper[4183]: > Aug 13 20:42:11 crc kubenswrapper[4183]: I0813 20:42:11.984442 4183 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.028486 4183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2tgr" podStartSLOduration=25.310193169 podStartE2EDuration="51.028422928s" podCreationTimestamp="2025-08-13 20:41:21 +0000 UTC" firstStartedPulling="2025-08-13 20:41:23.140881353 +0000 UTC m=+3449.833546071" lastFinishedPulling="2025-08-13 20:41:48.859111222 +0000 UTC m=+3475.551775830" observedRunningTime="2025-08-13 20:41:50.480344302 +0000 UTC m=+3477.173009280" watchObservedRunningTime="2025-08-13 20:42:12.028422928 +0000 UTC m=+3498.721087656" Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.100927 4183 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:42:12 crc kubenswrapper[4183]: I0813 20:42:12.176489 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.263240 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove 
non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.587508 4183 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2tgr" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" containerID="cri-o://d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" gracePeriod=2 Aug 13 20:42:13 crc kubenswrapper[4183]: I0813 20:42:13.985208 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.243675 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329446 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329529 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.329562 4183 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") pod \"58e4f786-ee2a-45c4-83a4-523611d1eccd\" (UID: \"58e4f786-ee2a-45c4-83a4-523611d1eccd\") " Aug 13 20:42:14 crc kubenswrapper[4183]: 
I0813 20:42:14.330725 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities" (OuterVolumeSpecName: "utilities") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.346140 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9" (OuterVolumeSpecName: "kube-api-access-shhm9") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "kube-api-access-shhm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.431373 4183 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-utilities\") on node \"crc\" DevicePath \"\"" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.431440 4183 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-shhm9\" (UniqueName: \"kubernetes.io/projected/58e4f786-ee2a-45c4-83a4-523611d1eccd-kube-api-access-shhm9\") on node \"crc\" DevicePath \"\"" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622657 4183 generic.go:334] "Generic (PLEG): container finished" podID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" exitCode=0 Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622712 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"} Aug 13 20:42:14 crc 
kubenswrapper[4183]: I0813 20:42:14.622765 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2tgr" event={"ID":"58e4f786-ee2a-45c4-83a4-523611d1eccd","Type":"ContainerDied","Data":"b07b3fcd02d69d1222fdf132ca426f7cb86cb788df356d30a6d271afcf12936c"} Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.622852 4183 scope.go:117] "RemoveContainer" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.623034 4183 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2tgr" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.791096 4183 scope.go:117] "RemoveContainer" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.903231 4183 scope.go:117] "RemoveContainer" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.973171 4183 scope.go:117] "RemoveContainer" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.974453 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": container with ID starting with d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0 not found: ID does not exist" containerID="d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.974568 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0"} err="failed to get container status 
\"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": rpc error: code = NotFound desc = could not find container \"d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0\": container with ID starting with d71a08820a628e49a4944e224dac2a57c287993423476efa7e5926f4e7df03d0 not found: ID does not exist" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.974596 4183 scope.go:117] "RemoveContainer" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42" Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.975768 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": container with ID starting with 23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42 not found: ID does not exist" containerID="23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.976375 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42"} err="failed to get container status \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": rpc error: code = NotFound desc = could not find container \"23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42\": container with ID starting with 23cb6067105cb81e29b706a75511879876a39ff71faee76af4065685c8489b42 not found: ID does not exist" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.976404 4183 scope.go:117] "RemoveContainer" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54" Aug 13 20:42:14 crc kubenswrapper[4183]: E0813 20:42:14.977560 4183 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": container with ID starting with 97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54 not found: ID does not exist" containerID="97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54" Aug 13 20:42:14 crc kubenswrapper[4183]: I0813 20:42:14.977600 4183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54"} err="failed to get container status \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": rpc error: code = NotFound desc = could not find container \"97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54\": container with ID starting with 97975b8478480bc243fd4dfc822e187789038bc9e4be6621b7b69c1f88b52b54 not found: ID does not exist" Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.279549 4183 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58e4f786-ee2a-45c4-83a4-523611d1eccd" (UID: "58e4f786-ee2a-45c4-83a4-523611d1eccd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.345759 4183 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58e4f786-ee2a-45c4-83a4-523611d1eccd-catalog-content\") on node \"crc\" DevicePath \"\"" Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.645911 4183 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:42:15 crc kubenswrapper[4183]: I0813 20:42:15.671541 4183 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2tgr"] Aug 13 20:42:16 crc kubenswrapper[4183]: I0813 20:42:16.591921 4183 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Aug 13 20:42:17 crc kubenswrapper[4183]: I0813 20:42:17.218922 4183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" path="/var/lib/kubelet/pods/58e4f786-ee2a-45c4-83a4-523611d1eccd/volumes" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.022059 4183 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.022931 4183 topology_manager.go:215] "Topology Admit Handler" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" podNamespace="openshift-marketplace" podName="community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023252 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-content" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023293 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-content" Aug 13 
20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023313 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-utilities" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023325 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="extract-utilities" Aug 13 20:42:26 crc kubenswrapper[4183]: E0813 20:42:26.023345 4183 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023355 4183 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.023548 4183 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e4f786-ee2a-45c4-83a4-523611d1eccd" containerName="registry-server" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.033492 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.042188 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.209469 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.210951 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.211019 4183 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312196 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312307 4183 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.312335 4183 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.313570 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.313883 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.356133 4183 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:26 crc kubenswrapper[4183]: I0813 20:42:26.889621 4183 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Aug 13 20:42:27 crc kubenswrapper[4183]: I0813 20:42:27.900601 4183 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Aug 13 20:42:28 crc kubenswrapper[4183]: I0813 20:42:28.727615 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"b4ce7c1e13297d1e3743efaf9f1064544bf90f65fb0b7a8fecd420a76ed2a73a"} Aug 13 20:42:31 crc kubenswrapper[4183]: I0813 20:42:31.758640 4183 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" exitCode=0 Aug 13 20:42:31 crc kubenswrapper[4183]: I0813 20:42:31.758743 4183 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f"} Aug 13 20:42:34 crc systemd[1]: Stopping Kubernetes Kubelet... Aug 13 20:42:34 crc kubenswrapper[4183]: I0813 20:42:34.901075 4183 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Aug 13 20:42:34 crc systemd[1]: kubelet.service: Deactivated successfully. Aug 13 20:42:34 crc systemd[1]: Stopped Kubernetes Kubelet. Aug 13 20:42:34 crc systemd[1]: kubelet.service: Consumed 9min 48.169s CPU time. -- Boot b76a442058a84c9e91bb8ab80ac3a4e5 -- Nov 28 00:07:48 crc systemd[1]: Starting Kubernetes Kubelet... Nov 28 00:07:48 crc kubenswrapper[3021]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 00:07:48 crc kubenswrapper[3021]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 00:07:48 crc kubenswrapper[3021]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 00:07:48 crc kubenswrapper[3021]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 00:07:48 crc kubenswrapper[3021]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 00:07:48 crc kubenswrapper[3021]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.502185 3021 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504117 3021 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504138 3021 feature_gate.go:227] unrecognized feature gate: NewOLM Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504146 3021 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504152 3021 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504166 3021 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504173 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504179 3021 feature_gate.go:227] unrecognized feature gate: GatewayAPI Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504186 3021 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504193 3021 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504200 3021 feature_gate.go:227] unrecognized feature gate: PlatformOperators Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504206 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504212 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504218 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Nov 28 
00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504223 3021 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504230 3021 feature_gate.go:227] unrecognized feature gate: MetricsServer Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504235 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504241 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfig Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504247 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504252 3021 feature_gate.go:227] unrecognized feature gate: PinnedImages Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504258 3021 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504264 3021 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504270 3021 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504275 3021 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504280 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504286 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504292 3021 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504297 3021 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504302 3021 
feature_gate.go:227] unrecognized feature gate: DNSNameResolver Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504308 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504313 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504319 3021 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504325 3021 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504330 3021 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504336 3021 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504342 3021 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504347 3021 feature_gate.go:227] unrecognized feature gate: ImagePolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504352 3021 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504358 3021 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504365 3021 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504370 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504376 3021 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504381 3021 feature_gate.go:227] unrecognized feature gate: SignatureStores Nov 28 
00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504388 3021 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504393 3021 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504398 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504404 3021 feature_gate.go:227] unrecognized feature gate: Example Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504409 3021 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504415 3021 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504420 3021 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504425 3021 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504431 3021 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504436 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504441 3021 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504446 3021 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504452 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504457 3021 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504463 3021 
feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504469 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504474 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.504480 3021 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504562 3021 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504576 3021 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504583 3021 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504589 3021 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504595 3021 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504599 3021 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504606 3021 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504612 3021 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504617 3021 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504622 3021 flags.go:64] FLAG: --azure-container-registry-config="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504626 3021 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504631 3021 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504635 3021 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504640 3021 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504648 3021 flags.go:64] FLAG: --cgroup-root="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504653 3021 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504658 3021 flags.go:64] FLAG: --client-ca-file="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504662 3021 flags.go:64] FLAG: --cloud-config="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504667 3021 flags.go:64] FLAG: --cloud-provider="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504671 3021 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504678 3021 flags.go:64] FLAG: --cluster-domain="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504682 3021 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504687 3021 flags.go:64] FLAG: --config-dir="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504691 3021 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504696 3021 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504702 3021 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504707 3021 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504712 3021 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504717 3021 flags.go:64] 
FLAG: --containerd-namespace="k8s.io" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504722 3021 flags.go:64] FLAG: --contention-profiling="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504728 3021 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504732 3021 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504737 3021 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504742 3021 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504748 3021 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504753 3021 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504757 3021 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504762 3021 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504766 3021 flags.go:64] FLAG: --enable-server="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504770 3021 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504777 3021 flags.go:64] FLAG: --event-burst="100" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504782 3021 flags.go:64] FLAG: --event-qps="50" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504787 3021 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504791 3021 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504795 3021 flags.go:64] FLAG: --eviction-hard="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504801 3021 flags.go:64] FLAG: 
--eviction-max-pod-grace-period="0" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504806 3021 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504811 3021 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504816 3021 flags.go:64] FLAG: --eviction-soft="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504820 3021 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504825 3021 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504830 3021 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504835 3021 flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504839 3021 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504844 3021 flags.go:64] FLAG: --feature-gates="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504849 3021 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504854 3021 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504858 3021 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504863 3021 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504868 3021 flags.go:64] FLAG: --healthz-port="10248" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504872 3021 flags.go:64] FLAG: --help="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504877 3021 flags.go:64] FLAG: --hostname-override="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504881 3021 flags.go:64] FLAG: 
--housekeeping-interval="10s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504885 3021 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504890 3021 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504894 3021 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504898 3021 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504902 3021 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504907 3021 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504911 3021 flags.go:64] FLAG: --iptables-drop-bit="15" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504916 3021 flags.go:64] FLAG: --iptables-masquerade-bit="14" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504920 3021 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504925 3021 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504929 3021 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504934 3021 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504939 3021 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504943 3021 flags.go:64] FLAG: --kube-reserved="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504948 3021 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504952 3021 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 
00:07:48.504957 3021 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504962 3021 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504966 3021 flags.go:64] FLAG: --lock-file="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504970 3021 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504975 3021 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504979 3021 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504986 3021 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504991 3021 flags.go:64] FLAG: --logging-format="text" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.504996 3021 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505001 3021 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505005 3021 flags.go:64] FLAG: --manifest-url="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505010 3021 flags.go:64] FLAG: --manifest-url-header="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505016 3021 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505021 3021 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505026 3021 flags.go:64] FLAG: --max-pods="110" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505031 3021 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505035 3021 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 
00:07:48.505039 3021 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505044 3021 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505048 3021 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505052 3021 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505057 3021 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505067 3021 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505072 3021 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505076 3021 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505081 3021 flags.go:64] FLAG: --pod-cidr="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505085 3021 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505095 3021 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505099 3021 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505104 3021 flags.go:64] FLAG: --pods-per-core="0" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505108 3021 flags.go:64] FLAG: --port="10250" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505113 3021 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505118 3021 flags.go:64] FLAG: --provider-id="" Nov 28 00:07:48 crc 
kubenswrapper[3021]: I1128 00:07:48.505122 3021 flags.go:64] FLAG: --qos-reserved="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505127 3021 flags.go:64] FLAG: --read-only-port="10255" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505131 3021 flags.go:64] FLAG: --register-node="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505136 3021 flags.go:64] FLAG: --register-schedulable="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505140 3021 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505148 3021 flags.go:64] FLAG: --registry-burst="10" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505152 3021 flags.go:64] FLAG: --registry-qps="5" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505157 3021 flags.go:64] FLAG: --reserved-cpus="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505162 3021 flags.go:64] FLAG: --reserved-memory="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505168 3021 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505173 3021 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505178 3021 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505182 3021 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505187 3021 flags.go:64] FLAG: --runonce="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505192 3021 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505197 3021 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505202 3021 flags.go:64] FLAG: --seccomp-default="false" Nov 28 00:07:48 crc kubenswrapper[3021]: 
I1128 00:07:48.505206 3021 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505210 3021 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505215 3021 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505220 3021 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505224 3021 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505229 3021 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505233 3021 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505238 3021 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505242 3021 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505246 3021 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505251 3021 flags.go:64] FLAG: --system-cgroups="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505256 3021 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505263 3021 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505267 3021 flags.go:64] FLAG: --tls-cert-file="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505272 3021 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505278 3021 flags.go:64] FLAG: --tls-min-version="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505283 3021 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 00:07:48 crc 
kubenswrapper[3021]: I1128 00:07:48.505287 3021 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505291 3021 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505296 3021 flags.go:64] FLAG: --topology-manager-scope="container" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505300 3021 flags.go:64] FLAG: --v="2" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505306 3021 flags.go:64] FLAG: --version="false" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505312 3021 flags.go:64] FLAG: --vmodule="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505318 3021 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505323 3021 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505394 3021 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505400 3021 feature_gate.go:227] unrecognized feature gate: NewOLM Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505407 3021 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505414 3021 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505420 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505429 3021 feature_gate.go:227] unrecognized feature gate: GatewayAPI Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505434 3021 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505440 3021 feature_gate.go:227] unrecognized feature gate: 
AdminNetworkPolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505445 3021 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505451 3021 feature_gate.go:227] unrecognized feature gate: PlatformOperators Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505456 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505461 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505467 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505473 3021 feature_gate.go:227] unrecognized feature gate: MetricsServer Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505479 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505484 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfig Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505489 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505495 3021 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505501 3021 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505507 3021 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505512 3021 feature_gate.go:227] unrecognized feature gate: PinnedImages Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505518 3021 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505523 3021 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505532 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505556 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505563 3021 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505568 3021 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505574 3021 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505580 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505586 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505591 3021 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505597 3021 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505603 3021 
feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505609 3021 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505614 3021 feature_gate.go:227] unrecognized feature gate: ImagePolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505620 3021 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505625 3021 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505633 3021 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505638 3021 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505644 3021 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505650 3021 feature_gate.go:227] unrecognized feature gate: SignatureStores Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505656 3021 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505661 3021 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505666 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505672 3021 feature_gate.go:227] unrecognized feature gate: Example Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505678 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505683 3021 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Nov 28 00:07:48 crc 
kubenswrapper[3021]: W1128 00:07:48.505688 3021 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505694 3021 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505699 3021 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505705 3021 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505710 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505716 3021 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505721 3021 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505743 3021 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505749 3021 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505755 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505761 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505767 3021 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.505772 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.505779 3021 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.516353 3021 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.516408 3021 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516519 3021 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516542 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516589 3021 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516603 3021 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516619 3021 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516634 3021 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516652 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516668 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516683 3021 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516697 3021 feature_gate.go:227] unrecognized feature gate: 
ExternalCloudProvider Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516709 3021 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516722 3021 feature_gate.go:227] unrecognized feature gate: NewOLM Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516734 3021 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516746 3021 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516758 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516770 3021 feature_gate.go:227] unrecognized feature gate: GatewayAPI Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516782 3021 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516794 3021 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516806 3021 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516817 3021 feature_gate.go:227] unrecognized feature gate: PlatformOperators Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516829 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516840 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516852 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516864 3021 feature_gate.go:227] unrecognized feature gate: MetricsServer Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516876 3021 
feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516888 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfig Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516900 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516912 3021 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516926 3021 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516941 3021 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516953 3021 feature_gate.go:227] unrecognized feature gate: PinnedImages Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516966 3021 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516978 3021 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.516990 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517002 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517013 3021 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517024 3021 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517037 3021 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517048 3021 feature_gate.go:227] unrecognized feature gate: 
ClusterAPIInstallOpenStack Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517060 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517072 3021 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517084 3021 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517096 3021 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517129 3021 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517141 3021 feature_gate.go:227] unrecognized feature gate: ImagePolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517152 3021 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517164 3021 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517175 3021 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517188 3021 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517200 3021 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517211 3021 feature_gate.go:227] unrecognized feature gate: SignatureStores Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517223 3021 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517235 3021 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517248 3021 
feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517259 3021 feature_gate.go:227] unrecognized feature gate: Example
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517271 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517283 3021 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517294 3021 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517306 3021 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517318 3021 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.517332 3021 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517503 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517518 3021 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517565 3021 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517579 3021 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517592 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517604 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517616 3021 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517628 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517640 3021 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517651 3021 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517663 3021 feature_gate.go:227] unrecognized feature gate: NewOLM
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517676 3021 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517688 3021 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517699 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517711 3021 feature_gate.go:227] unrecognized feature gate: GatewayAPI
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517722 3021 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517734 3021 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517746 3021 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517758 3021 feature_gate.go:227] unrecognized feature gate: PlatformOperators
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517769 3021 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517781 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517792 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517804 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517817 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfig
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517829 3021 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517840 3021 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
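An aside for anyone post-processing this log: the `feature_gate.go:250] feature gates: &{map[...]}` summary above is Go's default map formatting, not JSON. A minimal sketch of turning it into a Python dict (the helper name is ours, and it assumes gate names never contain spaces or colons, which holds for the entries in this log):

```python
import re


def parse_feature_gates(entry: str) -> dict[str, bool]:
    """Parse a kubelet 'feature gates: &{map[Name:true ...]}' log entry."""
    m = re.search(r"&\{map\[(.*?)\]\}", entry)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates
```

This can be handy for diffing the effective gate set between two kubelet restarts, since the "unrecognized feature gate" warnings only cover gates the kubelet does not know about.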
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517853 3021 feature_gate.go:227] unrecognized feature gate: MetricsServer
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517865 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517877 3021 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517889 3021 feature_gate.go:227] unrecognized feature gate: PinnedImages
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517901 3021 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517912 3021 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517924 3021 feature_gate.go:227] unrecognized feature gate: ExternalOIDC
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517937 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517949 3021 feature_gate.go:227] unrecognized feature gate: HardwareSpeed
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517961 3021 feature_gate.go:227] unrecognized feature gate: OnClusterBuild
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517973 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517985 3021 feature_gate.go:227] unrecognized feature gate: DNSNameResolver
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.517997 3021 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518009 3021 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518023 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518035 3021 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518047 3021 feature_gate.go:227] unrecognized feature gate: ImagePolicy
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518059 3021 feature_gate.go:227] unrecognized feature gate: ManagedBootImages
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518071 3021 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518082 3021 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518094 3021 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518106 3021 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518118 3021 feature_gate.go:227] unrecognized feature gate: UpgradeStatus
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518129 3021 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518141 3021 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518152 3021 feature_gate.go:227] unrecognized feature gate: Example
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518164 3021 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518192 3021 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518204 3021 feature_gate.go:227] unrecognized feature gate: SignatureStores
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518216 3021 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518228 3021 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518239 3021 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518252 3021 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.518264 3021 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.518276 3021 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]}
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.518902 3021 server.go:925] "Client rotation is on, will bootstrap in background"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.524453 3021 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.525253 3021 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
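The certificate-manager entries that follow report how long the kubelet will wait before rotating its client and serving certificates, using Go duration strings such as `3088h30m38.384166633s`. A small sketch (helper name ours) for converting that format to seconds when graphing rotation schedules from these logs:

```python
import re

# Unit multipliers for the units Go's time.Duration formatter emits.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}

# Multi-letter units must precede their single-letter prefixes ("ms" before "m").
_PART = re.compile(r"(\d+(?:\.\d+)?)(h|ms|us|ns|m|s)")


def go_duration_seconds(duration: str) -> float:
    """Convert a Go duration string like '3088h30m38.384166633s' to seconds."""
    return sum(float(value) * _UNITS[unit] for value, unit in _PART.findall(duration))
```

This matches only the units `time.Duration.String()` actually produces; negative durations do not appear in these log lines and are not handled.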
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.525813 3021 server.go:982] "Starting client certificate rotation"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.525832 3021 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.526183 3021 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-05 16:38:26.91047295 +0000 UTC
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.526312 3021 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 3088h30m38.384166633s for next certificate rotation
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.532605 3021 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.537781 3021 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.538299 3021 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.553214 3021 remote_runtime.go:143] "Validated CRI v1 runtime API"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.553293 3021 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.573038 3021 remote_image.go:111] "Validated CRI v1 image API"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.578308 3021 fs.go:132] Filesystem UUIDs: map[2025-11-28-00-07-21-00:/dev/sr0 68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2]
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.578357 3021 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/0:{mountpoint:/run/user/0 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.610670 3021 manager.go:217] Machine: {Timestamp:2025-11-28 00:07:48.60770485 +0000 UTC m=+0.257396805 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 SystemUUID:b43e451d-7b03-476c-9a13-16cc174618c5 BootID:b76a4420-58a8-4c9e-91bb-8ab80ac3a4e5 Filesystems:[{Device:/run/user/0 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680320 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:46:ba:73 Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:46:ba:73 Speed:-1 Mtu:1500} {Name:eth10 MacAddress:2e:fd:8b:c0:c5:4e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:52:3b:54:1a:a2:5b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.611069 3021 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
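The cAdvisor `manager.go:217] Machine: {...}` dump above uses Go struct formatting, so generic JSON tooling cannot read it. For quick triage it is often enough to pull individual numeric fields out with a regex; a minimal sketch (helper name ours, limited to integer-valued `Key:value` fields like `NumCores` or `MemoryCapacity`):

```python
import re


def machine_field(dump: str, key: str) -> int:
    """Extract an integer field such as NumCores or MemoryCapacity
    from a cAdvisor Machine struct dump."""
    m = re.search(rf"\b{re.escape(key)}:(\d+)", dump)
    if m is None:
        raise KeyError(key)
    return int(m.group(1))
```

Nested fields with repeated keys (e.g. the per-core `Size:` entries) would need a real parser; this sketch only targets the top-level scalars.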
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.611186 3021 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.612819 3021 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.613240 3021 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.613973 3021 topology_manager.go:138] "Creating topology manager with none policy"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.614030 3021 container_manager_linux.go:304] "Creating device plugin manager"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.614263 3021 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.614684 3021 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.615729 3021 state_mem.go:36] "Initialized new in-memory state store"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.615875 3021 server.go:1227] "Using root directory" path="/var/lib/kubelet"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.617041 3021 kubelet.go:406] "Attempting to sync node with API server"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.617096 3021 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.617140 3021 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.617175 3021 kubelet.go:322] "Adding apiserver pod source"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.617249 3021 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.619776 3021 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.621271 3021 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
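Unlike the Go struct dumps, the `nodeConfig={...}` payload logged at container_manager_linux.go:273 above is valid JSON once the `nodeConfig=` prefix is stripped, so eviction thresholds and reserved resources can be inspected directly. A sketch (helper name ours; it assumes the JSON object runs to the end of the entry, as it does in this log):

```python
import json


def parse_node_config(entry: str) -> dict:
    """Extract and decode the nodeConfig JSON from a
    container_manager_linux.go:273 log entry."""
    payload = entry.split("nodeConfig=", 1)[1]
    return json.loads(payload)
```

From the decoded dict, `cfg["SystemReserved"]` and `cfg["HardEvictionThresholds"]` give the values the container manager was actually constructed with, which is useful when checking whether a MachineConfig change took effect.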
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.622079 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622209 3021 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.622246 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.622196 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.622370 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622519 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622588 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622602 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622621 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622634 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622653 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622666 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622678 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622693 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622704 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622722 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622757 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622804 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622839 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.622855 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.623180 3021 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.623891 3021 server.go:1262] "Started kubelet"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.626031 3021 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.626129 3021 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.626393 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc systemd[1]: Started Kubernetes Kubelet.
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.629076 3021 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.629212 3021 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.629296 3021 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-30 22:43:18.314813422 +0000 UTC
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.629372 3021 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 3694h35m29.685448417s for next certificate rotation
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.630164 3021 volume_manager.go:289] "The desired_state_of_world populator starts"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.630397 3021 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.630967 3021 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.631408 3021 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.632867 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.633083 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.633508 3021 server.go:461] "Adding debug handlers to kubelet server"
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.634401 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.634619 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="200ms"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.638946 3021 factory.go:153] Registering CRI-O factory
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.639024 3021 factory.go:221] Registration of the crio container factory successfully
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.639276 3021 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.639309 3021 factory.go:55] Registering systemd factory
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.639322 3021 factory.go:221] Registration of the systemd container factory successfully
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.639350 3021 factory.go:103] Registering Raw factory
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.639371 3021 manager.go:1196] Started watching for new ooms in manager
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.654597 3021 manager.go:319] Starting recovery of all containers
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.658914 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.658960 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.658980 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.658997 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659016 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659032 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659050 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659066 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659085 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659102 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659138 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659156 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659174 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659192 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659229 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659247 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659263 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659281 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659299 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659316 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659334 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659352 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659370 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659387 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659410 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659429 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659468 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659503 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659522 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659566 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659588 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659622 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659642 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659661 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659678 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659696 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659715 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659733 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659751 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" 
volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659769 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659819 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659860 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659880 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659900 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659916 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" 
volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659933 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659950 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659967 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.659991 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.660009 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.660034 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" 
volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.660052 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.660071 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.661502 3021 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.661665 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664350 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664426 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664457 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664481 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664502 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664524 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664566 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664588 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664610 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664657 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664678 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664698 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664716 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664737 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" 
volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664758 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664788 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664807 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664825 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664841 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664861 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" 
volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664878 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664918 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664940 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664961 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664982 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.664999 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" 
volumeName="kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665018 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665059 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665093 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665114 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665136 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665153 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" 
volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665174 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665194 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665212 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665231 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665257 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665275 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" 
volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665292 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665313 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665352 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665378 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665426 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665449 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" 
volumeName="kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665470 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665492 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665513 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665553 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665578 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665599 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" 
volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665619 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665643 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665663 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665681 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665697 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" seLinuxMountContext="" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665716 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" 
volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665733 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665758 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665777 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665830 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665850 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665868 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665885 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665906 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665925 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665943 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665962 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.665979 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666006 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666027 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666045 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666069 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666088 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666105 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666123 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666143 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666163 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666182 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666200 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666218 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666238 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666257 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666276 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666295 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666314 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666333 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666351 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666370 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666388 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666405 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666423 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666440 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666458 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666476 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666494 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666513 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666535 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666573 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666592 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666612 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666632 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666649 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666666 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666688 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666714 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666732 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666750 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666768 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666787 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666807 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666927 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666952 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666973 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.666993 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667012 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667033 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667053 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667077 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667097 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667116 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667136 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667155 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667176 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667195 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667214 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667232 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667311 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667335 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667357 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667374 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667392 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667414 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667432 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667450 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667468 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667487 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667513 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667551 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667586 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667610 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667633 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667656 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667679 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667699 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667720 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667746 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667768 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667793 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667816 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667840 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667861 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667884 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667905 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667928 3021 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext=""
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667944 3021 reconstruct_new.go:102] "Volume reconstruction finished"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.667960 3021 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.686492 3021 manager.go:324] Recovery completed
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.715805 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.717978 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.718026 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.718044 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.719101 3021 cpu_manager.go:215] "Starting CPU manager" policy="none"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.719135 3021 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.719162 3021 state_mem.go:36] "Initialized new in-memory state store"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.731004 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.732332 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.732376 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.732392 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.732426 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.734333 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.741388 3021 policy_none.go:49] "None policy: Start"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.742836 3021 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.742973 3021 state_mem.go:35] "Initializing new in-memory state store"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.800158 3021 manager.go:296] "Starting Device Plugin manager"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.800966 3021 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.800983 3021 server.go:79] "Starting device plugin registration server"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.801729 3021 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.801828 3021 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.801837 3021 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.823817 3021 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.826089 3021 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.826306 3021 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.826353 3021 kubelet.go:2343] "Starting kubelet main sync loop"
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.826412 3021 kubelet.go:2367] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Nov 28 00:07:48 crc kubenswrapper[3021]: W1128 00:07:48.828780 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.828863 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.838016 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="400ms"
Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.872499 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.930284 3021 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.930474 3021 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.930573 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.932426 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.932468 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.932482 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.932619 3021 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.932662 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.932992 3021 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.933097 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.933761 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.933807 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.933828 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.933961 3021 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.934050 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.934289 3021 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.934378 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.934404 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.934453 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.934406 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.934589 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935273 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935304 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935316 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935471 3021 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935539 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935596 3021 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935648 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935850 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935902 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935928 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.935987 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.936052 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.936091 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.936115 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937096 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937152 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937180 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937209 3021 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937240 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937260 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937381 3021 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937452 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937670 3021 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.937715 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:48 crc kubenswrapper[3021]: E1128 00:07:48.937941 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.938989 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.939053 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.939070 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.939372 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.939443 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.939474 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.939825 3021 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.939914 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.941007 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.941046 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.941061 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.975388 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 00:07:48 crc kubenswrapper[3021]: I1128 00:07:48.975493 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.077313 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078088 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078146 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078194 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078242 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078271 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078299 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078331 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078599 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.078943 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.079045 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.079239 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.079321 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.079405 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.079495 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.079601 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.079645 3021 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182686 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182743 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182770 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182794 3021 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182819 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182840 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182862 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182883 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182902 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") 
pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182920 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182941 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182962 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.182987 3021 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183425 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183463 3021 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183488 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183511 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183556 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183582 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183607 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " 
pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183632 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183659 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183684 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183709 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183769 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.183912 3021 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: E1128 00:07:49.240595 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="800ms" Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.272441 3021 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.297996 3021 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-0134198dabe1361b7d13ef51fe926ea945f88a8ea70f1f3c01d86fdb6aadcfdf WatchSource:0}: Error finding container 0134198dabe1361b7d13ef51fe926ea945f88a8ea70f1f3c01d86fdb6aadcfdf: Status 404 returned error can't find the container with id 0134198dabe1361b7d13ef51fe926ea945f88a8ea70f1f3c01d86fdb6aadcfdf Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.301055 3021 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.318984 3021 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-e5d53cc7cc7c1d82d1263aa2f77c18019f60e88fc6107fec6ad5adbd7b426ba5 WatchSource:0}: Error finding container e5d53cc7cc7c1d82d1263aa2f77c18019f60e88fc6107fec6ad5adbd7b426ba5: Status 404 returned error can't find the container with id e5d53cc7cc7c1d82d1263aa2f77c18019f60e88fc6107fec6ad5adbd7b426ba5
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.338365 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.339781 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.339818 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.339830 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.339858 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:07:49 crc kubenswrapper[3021]: E1128 00:07:49.341549 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.350538 3021 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.359300 3021 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.361042 3021 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae85115fdc231b4002b57317b41a6400.slice/crio-e387755ada077bd4ce9c3a142ba9d4dc197a6ff5adf08a3f0f0bfbaa4dbd2d27 WatchSource:0}: Error finding container e387755ada077bd4ce9c3a142ba9d4dc197a6ff5adf08a3f0f0bfbaa4dbd2d27: Status 404 returned error can't find the container with id e387755ada077bd4ce9c3a142ba9d4dc197a6ff5adf08a3f0f0bfbaa4dbd2d27
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.361181 3021 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.372059 3021 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6a3a59e513625ca0ae3724df2686bc.slice/crio-2044bff946589216335f80568855472a171ce638d999c4396024ee024f77a2c6 WatchSource:0}: Error finding container 2044bff946589216335f80568855472a171ce638d999c4396024ee024f77a2c6: Status 404 returned error can't find the container with id 2044bff946589216335f80568855472a171ce638d999c4396024ee024f77a2c6
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.373552 3021 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a57a7fb1944b43a6bd11a349520d301.slice/crio-8423d885f98d93b38fcc289f7463e6bc857620781d06326183a7de35c5c124fe WatchSource:0}: Error finding container 8423d885f98d93b38fcc289f7463e6bc857620781d06326183a7de35c5c124fe: Status 404 returned error can't find the container with id 8423d885f98d93b38fcc289f7463e6bc857620781d06326183a7de35c5c124fe
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.629189 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.691872 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: E1128 00:07:49.692237 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.832699 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"d998508535e3dc9018350eff9a306da42816e899f5122d21bcfb17aa159aa54d"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.832764 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"8423d885f98d93b38fcc289f7463e6bc857620781d06326183a7de35c5c124fe"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.835596 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"134c11b7b43fbb14f1cce153682fd54187b4bde4d3018cb25d166d6cd9373fb9"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.835624 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"2044bff946589216335f80568855472a171ce638d999c4396024ee024f77a2c6"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.837157 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.837180 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"e387755ada077bd4ce9c3a142ba9d4dc197a6ff5adf08a3f0f0bfbaa4dbd2d27"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.838743 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"6c4e6e1a856413d0b5584ca81e32a3013e61f66afd93793212c7480be0fb2860"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.838769 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"e5d53cc7cc7c1d82d1263aa2f77c18019f60e88fc6107fec6ad5adbd7b426ba5"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.838918 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.840328 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"0134198dabe1361b7d13ef51fe926ea945f88a8ea70f1f3c01d86fdb6aadcfdf"}
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.840451 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.840505 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:49 crc kubenswrapper[3021]: I1128 00:07:49.840519 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.883737 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: E1128 00:07:49.883831 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.893787 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: E1128 00:07:49.893860 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: W1128 00:07:49.935857 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:49 crc kubenswrapper[3021]: E1128 00:07:49.935934 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:50 crc kubenswrapper[3021]: E1128 00:07:50.043179 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="1.6s"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.142739 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.144579 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.144642 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.144657 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.144731 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:07:50 crc kubenswrapper[3021]: E1128 00:07:50.146770 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.628814 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.845103 3021 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="d998508535e3dc9018350eff9a306da42816e899f5122d21bcfb17aa159aa54d" exitCode=0
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.845195 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"d998508535e3dc9018350eff9a306da42816e899f5122d21bcfb17aa159aa54d"}
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.845230 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.847341 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.847400 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.847422 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.852800 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"82168fccc80618d9c77537a13dce4d6b21212229fce6c89d40d300292a451c75"}
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.852851 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"a735462cadc87dc499b32ef56a9a033b0b833c9cf2e088984e3d9e6ba412c729"}
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.852869 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"43d55fade3564eb046bfa2d057958ee51fb6e6a79a1a8c556632c000a5b98f29"}
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.853412 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.854974 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.855011 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.855027 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.859221 3021 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75" exitCode=0
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.859383 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.859895 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75"}
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.860374 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.860642 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.860863 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.862052 3021 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="6c4e6e1a856413d0b5584ca81e32a3013e61f66afd93793212c7480be0fb2860" exitCode=0
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.862116 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"6c4e6e1a856413d0b5584ca81e32a3013e61f66afd93793212c7480be0fb2860"}
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.862235 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.863091 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.863157 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.863172 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.863790 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.866904 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.866935 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.866949 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.870852 3021 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="c0063767f77b4f4b1abf0fc2bc00937096d84f0aac33bb95f5ea7a9316a32729" exitCode=0
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.870908 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"c0063767f77b4f4b1abf0fc2bc00937096d84f0aac33bb95f5ea7a9316a32729"}
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.871020 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.875729 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.875768 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:50 crc kubenswrapper[3021]: I1128 00:07:50.875781 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.628730 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:51 crc kubenswrapper[3021]: E1128 00:07:51.645732 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="3.2s"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.747667 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.749172 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.749228 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.749242 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.749281 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:07:51 crc kubenswrapper[3021]: E1128 00:07:51.750958 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.883048 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"d20f398aee9000ef80b9103afff852b23e40bf72a44197c678a2950d4e729477"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.883083 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.883104 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"f70e5e291a7f354c30ef60366c611c1336c35a8b4b6fa3f2c056096a650743b6"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.883124 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"3f2ef12f7925192a348472c1a90173b0faae9011d16749c90a7a4b7c5dba28ef"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.883985 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.884023 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.884040 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.895826 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"3591f295d30983c04e7835762f552a23df79c107a576d1a1b68164323f3b29e4"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.895869 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"c806377c89c0ce691a5cb179d3187ae4f02b46440c24281233071fbb06b4366b"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.895881 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"9c1416b8c6a466079801f9be5d7b27550ab5fd354573f9b32cae64e01ed3f695"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.897959 3021 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="fb15f607ae36b6138aa4ac4040b238a0eae52b7b6fec19ccd396d07ffd436bc8" exitCode=0
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.898005 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"fb15f607ae36b6138aa4ac4040b238a0eae52b7b6fec19ccd396d07ffd436bc8"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.898157 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.899253 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.899285 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.899297 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.899764 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.899816 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.899975 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"eb2e837eac256e4bf3cde71c99525bb4b84d85303ff11c722ad2bcc902cb7931"}
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.900608 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.900649 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.900665 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.900745 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.900777 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:51 crc kubenswrapper[3021]: I1128 00:07:51.900789 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.005050 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:07:52 crc kubenswrapper[3021]: W1128 00:07:52.057231 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: E1128 00:07:52.057312 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: W1128 00:07:52.622169 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: E1128 00:07:52.622296 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.628950 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: W1128 00:07:52.765673 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: E1128 00:07:52.765761 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: W1128 00:07:52.837815 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: E1128 00:07:52.837923 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.909241 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"760aee346ddb22427580c02a49a3a1d5ea831c51adeed5dfc8845d170af2f288"}
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.909290 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"4ca881e1dddf4d4356899329bc9b2b3ff4ab72b9778cec4323d14c4bb43cf3e1"}
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.909288 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.911049 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.911151 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.911179 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.912599 3021 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="16643b03bba9f21a956e3c4425c4ff09679a7b4144ffcd009e565ae6acf2616a" exitCode=0
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.912712 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"16643b03bba9f21a956e3c4425c4ff09679a7b4144ffcd009e565ae6acf2616a"}
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.912776 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.912800 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.912913 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914168 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914206 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914246 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914214 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914308 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914312 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914334 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914345 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:52 crc kubenswrapper[3021]: I1128 00:07:52.914369 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:53 crc kubenswrapper[3021]: E1128 00:07:53.106041 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.143747 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.176674 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.591680 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.591926 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.597882 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.597947 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.598049 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.605074 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.628846 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.920845 3021 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.920876 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.920892 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.920935 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"c2ea6296444b21fecf972e8ae0570e8f15bd9d759c7f10e08e292bc7ef4cf0f8"}
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.921022 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"92af5cde9602781bc40dbf0d140c3fe6165c74431e29289f8335ed99dfbe3c19"}
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.921052 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"c82d2474ae931fedc188c178352b4b4722e9f59f89960a7ba33ee20ab018f91f"}
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.921066 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922339 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922385 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922406 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922453 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922523 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922546 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922663 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922692 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:07:53 crc kubenswrapper[3021]: I1128 00:07:53.922704 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.138490 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.540936 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.628531 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:07:54 crc kubenswrapper[3021]: E1128 00:07:54.847912 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="6.4s"
Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.928966 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"17d9838eebf5eaf407d82cc93e140fa466bc6f08f7bac6f028b3b9c7414aa737"}
Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.928998 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28
crc kubenswrapper[3021]: I1128 00:07:54.929156 3021 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.929226 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.929263 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.930702 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.930765 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.930784 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.930809 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.930845 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.930865 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.931823 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.931913 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.931933 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.951309 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.952416 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.952457 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.952509 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:54 crc kubenswrapper[3021]: I1128 00:07:54.952551 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:07:54 crc kubenswrapper[3021]: E1128 00:07:54.954117 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:07:55 crc kubenswrapper[3021]: W1128 00:07:55.342937 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:55 crc kubenswrapper[3021]: E1128 00:07:55.343044 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.548387 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.628559 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.932155 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.932163 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.933159 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.933338 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.933367 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.933381 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.934293 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.934324 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.934341 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.934360 3021 kubelet_node_status.go:729] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.934363 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:55 crc kubenswrapper[3021]: I1128 00:07:55.934388 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:56 crc kubenswrapper[3021]: W1128 00:07:56.576206 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:56 crc kubenswrapper[3021]: E1128 00:07:56.576322 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:56 crc kubenswrapper[3021]: I1128 00:07:56.628831 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:56 crc kubenswrapper[3021]: I1128 00:07:56.653708 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 00:07:56 crc kubenswrapper[3021]: W1128 00:07:56.845831 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:56 crc kubenswrapper[3021]: E1128 00:07:56.845930 3021 
reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:56 crc kubenswrapper[3021]: I1128 00:07:56.934772 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:56 crc kubenswrapper[3021]: I1128 00:07:56.936036 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:56 crc kubenswrapper[3021]: I1128 00:07:56.936104 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:56 crc kubenswrapper[3021]: I1128 00:07:56.936134 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:57 crc kubenswrapper[3021]: W1128 00:07:57.482119 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:57 crc kubenswrapper[3021]: E1128 00:07:57.482443 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:57 crc kubenswrapper[3021]: I1128 00:07:57.541110 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:07:57 crc kubenswrapper[3021]: I1128 00:07:57.541278 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:07:57 crc kubenswrapper[3021]: I1128 00:07:57.628535 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:58 crc kubenswrapper[3021]: I1128 00:07:58.628393 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:58 crc kubenswrapper[3021]: I1128 00:07:58.647419 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 00:07:58 crc kubenswrapper[3021]: I1128 00:07:58.647854 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:58 crc kubenswrapper[3021]: I1128 00:07:58.650105 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:58 crc kubenswrapper[3021]: I1128 00:07:58.650177 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:58 crc kubenswrapper[3021]: I1128 00:07:58.650200 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 
00:07:58 crc kubenswrapper[3021]: E1128 00:07:58.872765 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.482058 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.482283 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.484085 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.484163 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.484185 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.490046 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.628549 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.944139 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.945548 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.945751 3021 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:07:59 crc kubenswrapper[3021]: I1128 00:07:59.945776 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:00 crc kubenswrapper[3021]: I1128 00:08:00.629556 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:01 crc kubenswrapper[3021]: E1128 00:08:01.250740 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:08:01 crc kubenswrapper[3021]: I1128 00:08:01.355078 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:01 crc kubenswrapper[3021]: I1128 00:08:01.356784 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:01 crc kubenswrapper[3021]: I1128 00:08:01.356856 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:01 crc kubenswrapper[3021]: I1128 00:08:01.356879 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:01 crc kubenswrapper[3021]: I1128 00:08:01.356926 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:08:01 crc kubenswrapper[3021]: E1128 00:08:01.358734 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such 
host" node="crc" Nov 28 00:08:01 crc kubenswrapper[3021]: I1128 00:08:01.628139 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:02 crc kubenswrapper[3021]: I1128 00:08:02.629106 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:02 crc kubenswrapper[3021]: I1128 00:08:02.736486 3021 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Nov 28 00:08:02 crc kubenswrapper[3021]: I1128 00:08:02.737115 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 00:08:02 crc kubenswrapper[3021]: I1128 00:08:02.743280 3021 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Nov 28 00:08:02 crc kubenswrapper[3021]: I1128 00:08:02.743355 3021 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 00:08:03 crc kubenswrapper[3021]: E1128 00:08:03.108548 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:08:03 crc kubenswrapper[3021]: W1128 00:08:03.623667 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:03 crc kubenswrapper[3021]: E1128 00:08:03.623772 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:03 crc kubenswrapper[3021]: I1128 00:08:03.627846 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:04 crc kubenswrapper[3021]: W1128 00:08:04.312905 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:04 crc kubenswrapper[3021]: E1128 00:08:04.313647 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:04 crc kubenswrapper[3021]: I1128 00:08:04.628985 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:04 crc kubenswrapper[3021]: W1128 00:08:04.812493 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:04 crc kubenswrapper[3021]: E1128 00:08:04.812594 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.550037 3021 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.550135 3021 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.628194 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.753964 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.754247 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.754950 3021 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.755036 3021 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 
192.168.126.11:17697: connect: connection refused" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.757321 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.757357 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.757369 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.760963 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.960391 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.960898 3021 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.960998 3021 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.961557 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.961603 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 00:08:05 crc kubenswrapper[3021]: I1128 00:08:05.961619 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:06 crc kubenswrapper[3021]: W1128 00:08:06.548181 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:06 crc kubenswrapper[3021]: E1128 00:08:06.548295 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.629045 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.709107 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.709501 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.711506 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.711576 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.711608 3021 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.738815 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.963002 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.964262 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.964358 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:06 crc kubenswrapper[3021]: I1128 00:08:06.964399 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:07 crc kubenswrapper[3021]: I1128 00:08:07.542108 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:08:07 crc kubenswrapper[3021]: I1128 00:08:07.542240 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:08:07 crc kubenswrapper[3021]: I1128 00:08:07.628658 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:08 crc kubenswrapper[3021]: E1128 00:08:08.252856 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:08:08 crc kubenswrapper[3021]: I1128 00:08:08.359873 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:08 crc kubenswrapper[3021]: I1128 00:08:08.361591 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:08 crc kubenswrapper[3021]: I1128 00:08:08.361649 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:08 crc kubenswrapper[3021]: I1128 00:08:08.361670 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:08 crc kubenswrapper[3021]: I1128 00:08:08.361711 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:08:08 crc kubenswrapper[3021]: E1128 00:08:08.375219 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:08:08 crc kubenswrapper[3021]: I1128 00:08:08.628961 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:08 crc kubenswrapper[3021]: E1128 00:08:08.887241 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:08:09 crc kubenswrapper[3021]: I1128 00:08:09.628368 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:10 crc kubenswrapper[3021]: I1128 00:08:10.628826 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:11 crc kubenswrapper[3021]: I1128 00:08:11.628819 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:12 crc kubenswrapper[3021]: I1128 00:08:12.629119 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:13 crc kubenswrapper[3021]: E1128 00:08:13.111028 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:08:13 crc kubenswrapper[3021]: I1128 00:08:13.628779 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:14 crc kubenswrapper[3021]: I1128 00:08:14.628587 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:15 crc kubenswrapper[3021]: E1128 00:08:15.255277 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.375990 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.377531 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.377574 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.377592 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.377625 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:08:15 crc kubenswrapper[3021]: E1128 00:08:15.379089 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.554425 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.554641 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.555934 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.555976 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.555995 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:15 crc kubenswrapper[3021]: I1128 00:08:15.629083 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:16 crc kubenswrapper[3021]: I1128 00:08:16.629546 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.541313 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.541519 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.541601 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.541813 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.543328 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.543382 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.543408 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.551149 3021 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"43d55fade3564eb046bfa2d057958ee51fb6e6a79a1a8c556632c000a5b98f29"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.551813 3021 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://43d55fade3564eb046bfa2d057958ee51fb6e6a79a1a8c556632c000a5b98f29" gracePeriod=30
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.628493 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:17 crc kubenswrapper[3021]: I1128 00:08:17.999270 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/1.log"
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.000369 3021 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="43d55fade3564eb046bfa2d057958ee51fb6e6a79a1a8c556632c000a5b98f29" exitCode=255
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.000435 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"43d55fade3564eb046bfa2d057958ee51fb6e6a79a1a8c556632c000a5b98f29"}
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.000496 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"215dc076257a5643b403f8977aca6ca0da008e98eedc3d45b7d2cf9c87c46f4b"}
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.000645 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.002089 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.002247 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.002277 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:18 crc kubenswrapper[3021]: W1128 00:08:18.256077 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:18 crc kubenswrapper[3021]: E1128 00:08:18.256722 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:18 crc kubenswrapper[3021]: I1128 00:08:18.628001 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:18 crc kubenswrapper[3021]: E1128 00:08:18.888481 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:08:19 crc kubenswrapper[3021]: I1128 00:08:19.482302 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:08:19 crc kubenswrapper[3021]: I1128 00:08:19.482676 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:19 crc kubenswrapper[3021]: I1128 00:08:19.484841 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:19 crc kubenswrapper[3021]: I1128 00:08:19.484893 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:19 crc kubenswrapper[3021]: I1128 00:08:19.484921 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:19 crc kubenswrapper[3021]: W1128 00:08:19.559314 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:19 crc kubenswrapper[3021]: E1128 00:08:19.559511 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:19 crc kubenswrapper[3021]: I1128 00:08:19.629014 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:20 crc kubenswrapper[3021]: I1128 00:08:20.629183 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:21 crc kubenswrapper[3021]: I1128 00:08:21.628625 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:22 crc kubenswrapper[3021]: E1128 00:08:22.257495 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:08:22 crc kubenswrapper[3021]: I1128 00:08:22.379821 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:22 crc kubenswrapper[3021]: I1128 00:08:22.381727 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:22 crc kubenswrapper[3021]: I1128 00:08:22.381820 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:22 crc kubenswrapper[3021]: I1128 00:08:22.381843 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:22 crc kubenswrapper[3021]: I1128 00:08:22.381893 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:08:22 crc kubenswrapper[3021]: E1128 00:08:22.383594 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:08:22 crc kubenswrapper[3021]: I1128 00:08:22.629201 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:23 crc kubenswrapper[3021]: E1128 00:08:23.113522 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:08:23 crc kubenswrapper[3021]: I1128 00:08:23.629053 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:24 crc kubenswrapper[3021]: I1128 00:08:24.541172 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:08:24 crc kubenswrapper[3021]: I1128 00:08:24.541512 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:24 crc kubenswrapper[3021]: I1128 00:08:24.543398 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:24 crc kubenswrapper[3021]: I1128 00:08:24.543513 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:24 crc kubenswrapper[3021]: I1128 00:08:24.543537 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:24 crc kubenswrapper[3021]: I1128 00:08:24.629071 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:25 crc kubenswrapper[3021]: I1128 00:08:25.628630 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:25 crc kubenswrapper[3021]: W1128 00:08:25.675158 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:25 crc kubenswrapper[3021]: E1128 00:08:25.675262 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:26 crc kubenswrapper[3021]: I1128 00:08:26.628717 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:27 crc kubenswrapper[3021]: I1128 00:08:27.542066 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:08:27 crc kubenswrapper[3021]: I1128 00:08:27.542246 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:08:27 crc kubenswrapper[3021]: W1128 00:08:27.548947 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:27 crc kubenswrapper[3021]: E1128 00:08:27.549049 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:27 crc kubenswrapper[3021]: I1128 00:08:27.629289 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:28 crc kubenswrapper[3021]: I1128 00:08:28.628545 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:28 crc kubenswrapper[3021]: E1128 00:08:28.889515 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:08:29 crc kubenswrapper[3021]: E1128 00:08:29.261024 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:08:29 crc kubenswrapper[3021]: I1128 00:08:29.385027 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:29 crc kubenswrapper[3021]: I1128 00:08:29.387111 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:29 crc kubenswrapper[3021]: I1128 00:08:29.387180 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:29 crc kubenswrapper[3021]: I1128 00:08:29.387193 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:29 crc kubenswrapper[3021]: I1128 00:08:29.387234 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:08:29 crc kubenswrapper[3021]: E1128 00:08:29.389003 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:08:29 crc kubenswrapper[3021]: I1128 00:08:29.628660 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:30 crc kubenswrapper[3021]: I1128 00:08:30.628913 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:31 crc kubenswrapper[3021]: I1128 00:08:31.628425 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:32 crc kubenswrapper[3021]: I1128 00:08:32.629439 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:33 crc kubenswrapper[3021]: E1128 00:08:33.115873 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:08:33 crc kubenswrapper[3021]: I1128 00:08:33.628491 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:34 crc kubenswrapper[3021]: I1128 00:08:34.627943 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:35 crc kubenswrapper[3021]: I1128 00:08:35.628597 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:36 crc kubenswrapper[3021]: E1128 00:08:36.263325 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:08:36 crc kubenswrapper[3021]: I1128 00:08:36.389228 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:36 crc kubenswrapper[3021]: I1128 00:08:36.391742 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:36 crc kubenswrapper[3021]: I1128 00:08:36.391816 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:36 crc kubenswrapper[3021]: I1128 00:08:36.391836 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:36 crc kubenswrapper[3021]: I1128 00:08:36.391876 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:08:36 crc kubenswrapper[3021]: E1128 00:08:36.393502 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:08:36 crc kubenswrapper[3021]: I1128 00:08:36.628391 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:37 crc kubenswrapper[3021]: I1128 00:08:37.542866 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:08:37 crc kubenswrapper[3021]: I1128 00:08:37.543016 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:08:37 crc kubenswrapper[3021]: I1128 00:08:37.628678 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:38 crc kubenswrapper[3021]: I1128 00:08:38.628063 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:38 crc kubenswrapper[3021]: E1128 00:08:38.889808 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:08:39 crc kubenswrapper[3021]: I1128 00:08:39.629271 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:40 crc kubenswrapper[3021]: I1128 00:08:40.628983 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:41 crc kubenswrapper[3021]: I1128 00:08:41.628422 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:42 crc kubenswrapper[3021]: I1128 00:08:42.013032 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:08:42 crc kubenswrapper[3021]: I1128 00:08:42.013179 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:42 crc kubenswrapper[3021]: I1128 00:08:42.014260 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:42 crc kubenswrapper[3021]: I1128 00:08:42.014292 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:42 crc kubenswrapper[3021]: I1128 00:08:42.014302 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:42 crc kubenswrapper[3021]: I1128 00:08:42.627979 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:43 crc kubenswrapper[3021]: E1128 00:08:43.118373 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:08:43 crc kubenswrapper[3021]: E1128 00:08:43.266163 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:08:43 crc kubenswrapper[3021]: I1128 00:08:43.394579 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:08:43 crc kubenswrapper[3021]: I1128 00:08:43.396522 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:08:43 crc kubenswrapper[3021]: I1128 00:08:43.396561 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:08:43 crc kubenswrapper[3021]: I1128 00:08:43.396573 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:08:43 crc kubenswrapper[3021]: I1128 00:08:43.396598 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:08:43 crc kubenswrapper[3021]: E1128 00:08:43.398069 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:08:43 crc kubenswrapper[3021]: I1128 00:08:43.628890 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:44 crc kubenswrapper[3021]: I1128 00:08:44.628920 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:45 crc kubenswrapper[3021]: I1128 00:08:45.628173 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:46 crc kubenswrapper[3021]: I1128 00:08:46.628248 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.541653 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.542067 3021 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.542189 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.542425 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.543785 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.543886 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.543963 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.545562 3021 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"215dc076257a5643b403f8977aca6ca0da008e98eedc3d45b7d2cf9c87c46f4b"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.545905 3021 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" 
containerID="cri-o://215dc076257a5643b403f8977aca6ca0da008e98eedc3d45b7d2cf9c87c46f4b" gracePeriod=30 Nov 28 00:08:47 crc kubenswrapper[3021]: I1128 00:08:47.628677 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.094631 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.096588 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/1.log" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.097081 3021 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="215dc076257a5643b403f8977aca6ca0da008e98eedc3d45b7d2cf9c87c46f4b" exitCode=255 Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.097154 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"215dc076257a5643b403f8977aca6ca0da008e98eedc3d45b7d2cf9c87c46f4b"} Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.097194 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"4d01bd1f42837a1269976254956978b7f23936035e0df49183a29a473485225c"} Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.097237 3021 scope.go:117] "RemoveContainer" 
containerID="43d55fade3564eb046bfa2d057958ee51fb6e6a79a1a8c556632c000a5b98f29" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.097414 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.099827 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.099892 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.099913 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.628277 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.630824 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.630922 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.630978 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.631018 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:08:48 crc kubenswrapper[3021]: I1128 00:08:48.631048 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:08:48 
crc kubenswrapper[3021]: E1128 00:08:48.890782 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:08:49 crc kubenswrapper[3021]: I1128 00:08:49.102177 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log" Nov 28 00:08:49 crc kubenswrapper[3021]: I1128 00:08:49.482479 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:08:49 crc kubenswrapper[3021]: I1128 00:08:49.483223 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:49 crc kubenswrapper[3021]: I1128 00:08:49.484343 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:49 crc kubenswrapper[3021]: I1128 00:08:49.484373 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:49 crc kubenswrapper[3021]: I1128 00:08:49.484382 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:49 crc kubenswrapper[3021]: I1128 00:08:49.629753 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:50 crc kubenswrapper[3021]: E1128 00:08:50.267909 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:08:50 crc kubenswrapper[3021]: 
I1128 00:08:50.398751 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:50 crc kubenswrapper[3021]: I1128 00:08:50.399958 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:50 crc kubenswrapper[3021]: I1128 00:08:50.399998 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:50 crc kubenswrapper[3021]: I1128 00:08:50.400012 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:50 crc kubenswrapper[3021]: I1128 00:08:50.400044 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:08:50 crc kubenswrapper[3021]: E1128 00:08:50.401132 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:08:50 crc kubenswrapper[3021]: I1128 00:08:50.628134 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:51 crc kubenswrapper[3021]: I1128 00:08:51.629167 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:52 crc kubenswrapper[3021]: I1128 00:08:52.628670 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:53 crc kubenswrapper[3021]: 
E1128 00:08:53.121073 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:08:53 crc kubenswrapper[3021]: I1128 00:08:53.628261 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:54 crc kubenswrapper[3021]: I1128 00:08:54.540947 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:08:54 crc kubenswrapper[3021]: I1128 00:08:54.541168 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:54 crc kubenswrapper[3021]: I1128 00:08:54.542829 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:54 crc kubenswrapper[3021]: I1128 00:08:54.542908 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:54 crc kubenswrapper[3021]: I1128 00:08:54.542924 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 00:08:54 crc kubenswrapper[3021]: I1128 00:08:54.628706 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:55 crc kubenswrapper[3021]: I1128 00:08:55.628341 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:56 crc kubenswrapper[3021]: I1128 00:08:56.628870 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:56 crc kubenswrapper[3021]: W1128 00:08:56.738444 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:56 crc kubenswrapper[3021]: E1128 00:08:56.738627 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:57 crc kubenswrapper[3021]: E1128 00:08:57.269939 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" 
interval="7s" Nov 28 00:08:57 crc kubenswrapper[3021]: I1128 00:08:57.401346 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:08:57 crc kubenswrapper[3021]: I1128 00:08:57.403304 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:08:57 crc kubenswrapper[3021]: I1128 00:08:57.403389 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:08:57 crc kubenswrapper[3021]: I1128 00:08:57.403423 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:08:57 crc kubenswrapper[3021]: I1128 00:08:57.403536 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:08:57 crc kubenswrapper[3021]: E1128 00:08:57.405277 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:08:57 crc kubenswrapper[3021]: I1128 00:08:57.541800 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:08:57 crc kubenswrapper[3021]: I1128 00:08:57.541976 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:08:57 
crc kubenswrapper[3021]: I1128 00:08:57.628676 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:58 crc kubenswrapper[3021]: I1128 00:08:58.628982 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:58 crc kubenswrapper[3021]: E1128 00:08:58.891307 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:08:59 crc kubenswrapper[3021]: W1128 00:08:59.094265 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:59 crc kubenswrapper[3021]: E1128 00:08:59.094373 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:08:59 crc kubenswrapper[3021]: I1128 00:08:59.628234 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:00 crc kubenswrapper[3021]: I1128 00:09:00.628612 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:01 crc kubenswrapper[3021]: I1128 00:09:01.630713 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:02 crc kubenswrapper[3021]: I1128 00:09:02.628675 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:03 crc kubenswrapper[3021]: E1128 00:09:03.122809 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:09:03 crc kubenswrapper[3021]: I1128 00:09:03.628766 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:04 crc kubenswrapper[3021]: E1128 00:09:04.271881 3021 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:09:04 crc kubenswrapper[3021]: I1128 00:09:04.406213 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:09:04 crc kubenswrapper[3021]: I1128 00:09:04.407853 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:04 crc kubenswrapper[3021]: I1128 00:09:04.407916 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:04 crc kubenswrapper[3021]: I1128 00:09:04.407933 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:04 crc kubenswrapper[3021]: I1128 00:09:04.408006 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:09:04 crc kubenswrapper[3021]: E1128 00:09:04.409675 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:09:04 crc kubenswrapper[3021]: I1128 00:09:04.628743 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:05 crc kubenswrapper[3021]: I1128 00:09:05.629137 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:06 crc 
kubenswrapper[3021]: I1128 00:09:06.628352 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:07 crc kubenswrapper[3021]: I1128 00:09:07.542600 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:09:07 crc kubenswrapper[3021]: I1128 00:09:07.542746 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:09:07 crc kubenswrapper[3021]: I1128 00:09:07.628717 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:08 crc kubenswrapper[3021]: W1128 00:09:08.509586 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:08 crc kubenswrapper[3021]: E1128 00:09:08.509755 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:08 crc kubenswrapper[3021]: I1128 00:09:08.629013 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:08 crc kubenswrapper[3021]: E1128 00:09:08.892840 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:09:09 crc kubenswrapper[3021]: I1128 00:09:09.628961 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:10 crc kubenswrapper[3021]: I1128 00:09:10.628768 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:11 crc kubenswrapper[3021]: E1128 00:09:11.274267 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:09:11 crc kubenswrapper[3021]: I1128 00:09:11.410794 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:09:11 crc kubenswrapper[3021]: I1128 00:09:11.412360 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:11 crc kubenswrapper[3021]: I1128 00:09:11.412441 3021 
kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:11 crc kubenswrapper[3021]: I1128 00:09:11.412496 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:11 crc kubenswrapper[3021]: I1128 00:09:11.412537 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:09:11 crc kubenswrapper[3021]: E1128 00:09:11.414325 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:09:11 crc kubenswrapper[3021]: I1128 00:09:11.628237 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:12 crc kubenswrapper[3021]: I1128 00:09:12.628181 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:13 crc kubenswrapper[3021]: E1128 00:09:13.125453 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:09:13 crc kubenswrapper[3021]: I1128 00:09:13.629018 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:14 crc kubenswrapper[3021]: I1128 00:09:14.628914 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:15 crc kubenswrapper[3021]: I1128 00:09:15.628543 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:16 crc kubenswrapper[3021]: I1128 00:09:16.629149 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:16 crc kubenswrapper[3021]: I1128 00:09:16.827097 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:16 crc kubenswrapper[3021]: I1128 00:09:16.828619 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:16 crc kubenswrapper[3021]: I1128 00:09:16.828675 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:16 crc kubenswrapper[3021]: I1128 00:09:16.828691 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.541758 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.541941 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.542062 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.542295 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.543892 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.543941 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.543962 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.547095 3021 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"4d01bd1f42837a1269976254956978b7f23936035e0df49183a29a473485225c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.547719 3021 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://4d01bd1f42837a1269976254956978b7f23936035e0df49183a29a473485225c" gracePeriod=30
Nov 28 00:09:17 crc kubenswrapper[3021]: I1128 00:09:17.628918 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:17 crc kubenswrapper[3021]: W1128 00:09:17.679429 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:17 crc kubenswrapper[3021]: E1128 00:09:17.679563 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.194608 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.195368 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/2.log"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.197321 3021 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="4d01bd1f42837a1269976254956978b7f23936035e0df49183a29a473485225c" exitCode=255
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.197370 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"4d01bd1f42837a1269976254956978b7f23936035e0df49183a29a473485225c"}
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.197406 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"08d7ef4f75c911c555e45742a02b236d7a594a9866f03ac2250818989e2ec3da"}
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.197432 3021 scope.go:117] "RemoveContainer" containerID="215dc076257a5643b403f8977aca6ca0da008e98eedc3d45b7d2cf9c87c46f4b"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.197617 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.199684 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.199738 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.199758 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:18 crc kubenswrapper[3021]: E1128 00:09:18.276433 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.415616 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.417602 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.417665 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.417680 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.417713 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:09:18 crc kubenswrapper[3021]: E1128 00:09:18.419142 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.627924 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.827133 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.828699 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.828760 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:18 crc kubenswrapper[3021]: I1128 00:09:18.828775 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:18 crc kubenswrapper[3021]: E1128 00:09:18.893101 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:09:19 crc kubenswrapper[3021]: I1128 00:09:19.202849 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log"
Nov 28 00:09:19 crc kubenswrapper[3021]: I1128 00:09:19.482664 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:09:19 crc kubenswrapper[3021]: I1128 00:09:19.482910 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:19 crc kubenswrapper[3021]: I1128 00:09:19.484428 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:19 crc kubenswrapper[3021]: I1128 00:09:19.484602 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:19 crc kubenswrapper[3021]: I1128 00:09:19.484736 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:19 crc kubenswrapper[3021]: I1128 00:09:19.629347 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:20 crc kubenswrapper[3021]: I1128 00:09:20.628925 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:21 crc kubenswrapper[3021]: I1128 00:09:21.629033 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:22 crc kubenswrapper[3021]: I1128 00:09:22.628341 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:23 crc kubenswrapper[3021]: E1128 00:09:23.126962 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:09:23 crc kubenswrapper[3021]: I1128 00:09:23.628650 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:24 crc kubenswrapper[3021]: I1128 00:09:24.541338 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:09:24 crc kubenswrapper[3021]: I1128 00:09:24.541590 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:24 crc kubenswrapper[3021]: I1128 00:09:24.543330 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:24 crc kubenswrapper[3021]: I1128 00:09:24.543388 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:24 crc kubenswrapper[3021]: I1128 00:09:24.543407 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:24 crc kubenswrapper[3021]: I1128 00:09:24.628371 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:25 crc kubenswrapper[3021]: E1128 00:09:25.279209 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:09:25 crc kubenswrapper[3021]: I1128 00:09:25.419618 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:25 crc kubenswrapper[3021]: I1128 00:09:25.422044 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:25 crc kubenswrapper[3021]: I1128 00:09:25.422141 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:25 crc kubenswrapper[3021]: I1128 00:09:25.422174 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:25 crc kubenswrapper[3021]: I1128 00:09:25.422238 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:09:25 crc kubenswrapper[3021]: E1128 00:09:25.424011 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:09:25 crc kubenswrapper[3021]: I1128 00:09:25.630577 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:26 crc kubenswrapper[3021]: I1128 00:09:26.628921 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:27 crc kubenswrapper[3021]: I1128 00:09:27.541554 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:09:27 crc kubenswrapper[3021]: I1128 00:09:27.541680 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:09:27 crc kubenswrapper[3021]: I1128 00:09:27.628311 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:28 crc kubenswrapper[3021]: I1128 00:09:28.629238 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:28 crc kubenswrapper[3021]: E1128 00:09:28.894434 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:09:29 crc kubenswrapper[3021]: I1128 00:09:29.628959 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:30 crc kubenswrapper[3021]: I1128 00:09:30.628745 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:31 crc kubenswrapper[3021]: I1128 00:09:31.628989 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:32 crc kubenswrapper[3021]: E1128 00:09:32.280890 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:09:32 crc kubenswrapper[3021]: I1128 00:09:32.424572 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:32 crc kubenswrapper[3021]: I1128 00:09:32.426193 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:32 crc kubenswrapper[3021]: I1128 00:09:32.426338 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:32 crc kubenswrapper[3021]: I1128 00:09:32.426437 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:32 crc kubenswrapper[3021]: I1128 00:09:32.426587 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:09:32 crc kubenswrapper[3021]: E1128 00:09:32.428278 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:09:32 crc kubenswrapper[3021]: I1128 00:09:32.628664 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:33 crc kubenswrapper[3021]: W1128 00:09:33.024719 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:33 crc kubenswrapper[3021]: E1128 00:09:33.024847 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:33 crc kubenswrapper[3021]: E1128 00:09:33.129398 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:09:33 crc kubenswrapper[3021]: E1128 00:09:33.129538 3021 event.go:294] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{crc.187c0303bda48099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,LastTimestamp:2025-11-28 00:07:48.623851673 +0000 UTC m=+0.273543552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:09:33 crc kubenswrapper[3021]: E1128 00:09:33.131089 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:09:33 crc kubenswrapper[3021]: I1128 00:09:33.628922 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:34 crc kubenswrapper[3021]: I1128 00:09:34.628755 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:35 crc kubenswrapper[3021]: E1128 00:09:35.426597 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:09:35 crc kubenswrapper[3021]: I1128 00:09:35.628791 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:36 crc kubenswrapper[3021]: I1128 00:09:36.628920 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:37 crc kubenswrapper[3021]: I1128 00:09:37.543065 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:09:37 crc kubenswrapper[3021]: I1128 00:09:37.543182 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:09:37 crc kubenswrapper[3021]: I1128 00:09:37.629040 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:38 crc kubenswrapper[3021]: I1128 00:09:38.628929 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:38 crc kubenswrapper[3021]: E1128 00:09:38.895083 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:09:39 crc kubenswrapper[3021]: W1128 00:09:39.271942 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:39 crc kubenswrapper[3021]: E1128 00:09:39.272036 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:39 crc kubenswrapper[3021]: E1128 00:09:39.282504 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:09:39 crc kubenswrapper[3021]: I1128 00:09:39.428748 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:39 crc kubenswrapper[3021]: I1128 00:09:39.430503 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:39 crc kubenswrapper[3021]: I1128 00:09:39.430576 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:39 crc kubenswrapper[3021]: I1128 00:09:39.430601 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:39 crc kubenswrapper[3021]: I1128 00:09:39.430651 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:09:39 crc kubenswrapper[3021]: E1128 00:09:39.432098 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:09:39 crc kubenswrapper[3021]: I1128 00:09:39.640407 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:41 crc kubenswrapper[3021]: I1128 00:09:41.205169 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:41 crc kubenswrapper[3021]: I1128 00:09:41.631373 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:42 crc kubenswrapper[3021]: I1128 00:09:42.628757 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:43 crc kubenswrapper[3021]: I1128 00:09:43.628693 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:44 crc kubenswrapper[3021]: I1128 00:09:44.628953 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:44 crc kubenswrapper[3021]: I1128 00:09:44.826689 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:44 crc kubenswrapper[3021]: I1128 00:09:44.827940 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:44 crc kubenswrapper[3021]: I1128 00:09:44.827981 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:44 crc kubenswrapper[3021]: I1128 00:09:44.827991 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:45 crc kubenswrapper[3021]: E1128 00:09:45.427924 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:09:45 crc kubenswrapper[3021]: I1128 00:09:45.627831 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:46 crc kubenswrapper[3021]: E1128 00:09:46.416823 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:09:46 crc kubenswrapper[3021]: I1128 00:09:46.432661 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:46 crc kubenswrapper[3021]: I1128 00:09:46.434074 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:09:46 crc kubenswrapper[3021]: I1128 00:09:46.434137 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:09:46 crc kubenswrapper[3021]: I1128 00:09:46.434151 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:09:46 crc kubenswrapper[3021]: I1128 00:09:46.434185 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:09:46 crc kubenswrapper[3021]: E1128 00:09:46.435707 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:09:46 crc kubenswrapper[3021]: I1128 00:09:46.628169 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.542016 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.542142 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.542231 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.542452 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.544083 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28
00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.544135 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.544154 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.547164 3021 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"08d7ef4f75c911c555e45742a02b236d7a594a9866f03ac2250818989e2ec3da"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.547756 3021 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://08d7ef4f75c911c555e45742a02b236d7a594a9866f03ac2250818989e2ec3da" gracePeriod=30 Nov 28 00:09:47 crc kubenswrapper[3021]: I1128 00:09:47.629924 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.424647 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log" Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.425661 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/3.log" Nov 28 00:09:48 
crc kubenswrapper[3021]: I1128 00:09:48.427011 3021 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="08d7ef4f75c911c555e45742a02b236d7a594a9866f03ac2250818989e2ec3da" exitCode=255 Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.427066 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"08d7ef4f75c911c555e45742a02b236d7a594a9866f03ac2250818989e2ec3da"} Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.427116 3021 scope.go:117] "RemoveContainer" containerID="4d01bd1f42837a1269976254956978b7f23936035e0df49183a29a473485225c" Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.628165 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.631582 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.631658 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.631728 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.631783 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:09:48 crc kubenswrapper[3021]: I1128 00:09:48.631821 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:09:48 crc 
kubenswrapper[3021]: E1128 00:09:48.896068 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.434295 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log" Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.436389 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"} Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.436556 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.437899 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.437948 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.437966 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.481523 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:09:49 crc kubenswrapper[3021]: I1128 00:09:49.627965 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:50 crc kubenswrapper[3021]: I1128 
00:09:50.438736 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:09:50 crc kubenswrapper[3021]: I1128 00:09:50.439576 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:50 crc kubenswrapper[3021]: I1128 00:09:50.439616 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:50 crc kubenswrapper[3021]: I1128 00:09:50.439631 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:50 crc kubenswrapper[3021]: I1128 00:09:50.627715 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:51 crc kubenswrapper[3021]: I1128 00:09:51.441331 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:09:51 crc kubenswrapper[3021]: I1128 00:09:51.443085 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:51 crc kubenswrapper[3021]: I1128 00:09:51.443225 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:51 crc kubenswrapper[3021]: I1128 00:09:51.443329 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:51 crc kubenswrapper[3021]: I1128 00:09:51.628445 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:52 crc kubenswrapper[3021]: I1128 00:09:52.628722 3021 
csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:53 crc kubenswrapper[3021]: E1128 00:09:53.419333 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:09:53 crc kubenswrapper[3021]: I1128 00:09:53.436118 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:09:53 crc kubenswrapper[3021]: I1128 00:09:53.437611 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:53 crc kubenswrapper[3021]: I1128 00:09:53.437662 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:53 crc kubenswrapper[3021]: I1128 00:09:53.437686 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:53 crc kubenswrapper[3021]: I1128 00:09:53.437747 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:09:53 crc kubenswrapper[3021]: E1128 00:09:53.438968 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:09:53 crc kubenswrapper[3021]: I1128 00:09:53.629323 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:54 crc 
kubenswrapper[3021]: W1128 00:09:54.359487 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:54 crc kubenswrapper[3021]: E1128 00:09:54.359595 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.541554 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.541760 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.543282 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.543384 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.543407 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.628745 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.826817 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller 
attach/detach" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.828123 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.828175 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:09:54 crc kubenswrapper[3021]: I1128 00:09:54.828186 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:09:55 crc kubenswrapper[3021]: E1128 00:09:55.430007 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:09:55 crc kubenswrapper[3021]: I1128 00:09:55.628837 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:56 crc kubenswrapper[3021]: I1128 00:09:56.628000 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: 
lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:57 crc kubenswrapper[3021]: I1128 00:09:57.542598 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:09:57 crc kubenswrapper[3021]: I1128 00:09:57.542791 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:09:57 crc kubenswrapper[3021]: I1128 00:09:57.629073 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:58 crc kubenswrapper[3021]: I1128 00:09:58.628862 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:58 crc kubenswrapper[3021]: W1128 00:09:58.807277 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:58 crc kubenswrapper[3021]: E1128 00:09:58.807397 3021 reflector.go:147] 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:09:58 crc kubenswrapper[3021]: E1128 00:09:58.896870 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:09:59 crc kubenswrapper[3021]: I1128 00:09:59.627770 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:00 crc kubenswrapper[3021]: E1128 00:10:00.421436 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:10:00 crc kubenswrapper[3021]: I1128 00:10:00.439938 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:10:00 crc kubenswrapper[3021]: I1128 00:10:00.441785 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:10:00 crc kubenswrapper[3021]: I1128 00:10:00.441862 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:10:00 crc kubenswrapper[3021]: I1128 00:10:00.441893 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:10:00 crc kubenswrapper[3021]: I1128 00:10:00.441948 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:10:00 crc kubenswrapper[3021]: E1128 
00:10:00.443628 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:10:00 crc kubenswrapper[3021]: I1128 00:10:00.628736 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:01 crc kubenswrapper[3021]: I1128 00:10:01.629114 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:02 crc kubenswrapper[3021]: I1128 00:10:02.628246 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:03 crc kubenswrapper[3021]: I1128 00:10:03.628285 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:04 crc kubenswrapper[3021]: I1128 00:10:04.628755 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:05 crc kubenswrapper[3021]: E1128 00:10:05.432506 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 
199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:10:05 crc kubenswrapper[3021]: I1128 00:10:05.628592 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:06 crc kubenswrapper[3021]: I1128 00:10:06.628052 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:07 crc kubenswrapper[3021]: E1128 00:10:07.423513 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 00:10:07.444358 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 00:10:07.446095 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 
00:10:07.446165 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 00:10:07.446195 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 00:10:07.446244 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:10:07 crc kubenswrapper[3021]: E1128 00:10:07.448001 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 00:10:07.541747 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 00:10:07.541902 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:10:07 crc kubenswrapper[3021]: I1128 00:10:07.628604 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:08 crc kubenswrapper[3021]: I1128 00:10:08.628607 3021 csi_plugin.go:880] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:08 crc kubenswrapper[3021]: E1128 00:10:08.898039 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:10:09 crc kubenswrapper[3021]: I1128 00:10:09.628894 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:10 crc kubenswrapper[3021]: I1128 00:10:10.628997 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:11 crc kubenswrapper[3021]: W1128 00:10:11.470097 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:11 crc kubenswrapper[3021]: E1128 00:10:11.470268 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:10:11 crc kubenswrapper[3021]: I1128 00:10:11.628184 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no 
such host Nov 28 00:10:14 crc kubenswrapper[3021]: I1128 00:10:14.448307 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:10:14 crc kubenswrapper[3021]: I1128 00:10:14.449772 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:10:14 crc kubenswrapper[3021]: I1128 00:10:14.449809 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:10:14 crc kubenswrapper[3021]: I1128 00:10:14.449823 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:10:14 crc kubenswrapper[3021]: I1128 00:10:14.449854 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:10:16 crc kubenswrapper[3021]: I1128 00:10:16.934146 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: no such host Nov 28 00:10:16 crc kubenswrapper[3021]: E1128 00:10:16.934238 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: no such host" node="crc" Nov 28 00:10:16 crc kubenswrapper[3021]: E1128 00:10:16.934273 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: no such host" interval="7s" Nov 28 00:10:16 crc kubenswrapper[3021]: E1128 00:10:16.935025 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup 
api-int.crc.testing on 199.204.47.54:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.541649 3021 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.541807 3021 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.541885 3021 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.542107 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.543450 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.543515 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.543531 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.545262 3021 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.545656 3021 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" containerID="cri-o://e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5" gracePeriod=30
Nov 28 00:10:17 crc kubenswrapper[3021]: W1128 00:10:17.557922 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:17 crc kubenswrapper[3021]: E1128 00:10:17.558033 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:17 crc kubenswrapper[3021]: I1128 00:10:17.628605 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:18 crc kubenswrapper[3021]: I1128 00:10:18.519290 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log"
Nov 28 00:10:18 crc kubenswrapper[3021]: I1128 00:10:18.519976 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/4.log"
Nov 28 00:10:18 crc kubenswrapper[3021]: I1128 00:10:18.521399 3021 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5" exitCode=255
Nov 28 00:10:18 crc kubenswrapper[3021]: I1128 00:10:18.521442 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"}
Nov 28 00:10:18 crc kubenswrapper[3021]: I1128 00:10:18.521503 3021 scope.go:117] "RemoveContainer" containerID="08d7ef4f75c911c555e45742a02b236d7a594a9866f03ac2250818989e2ec3da"
Nov 28 00:10:18 crc kubenswrapper[3021]: I1128 00:10:18.628939 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:18 crc kubenswrapper[3021]: E1128 00:10:18.899146 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:10:18 crc kubenswrapper[3021]: E1128 00:10:18.928413 3021 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Nov 28 00:10:19 crc kubenswrapper[3021]: I1128 00:10:19.526631 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log"
Nov 28 00:10:19 crc kubenswrapper[3021]: I1128 00:10:19.528264 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:19 crc kubenswrapper[3021]: I1128 00:10:19.529560 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:19 crc kubenswrapper[3021]: I1128 00:10:19.529623 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:19 crc kubenswrapper[3021]: I1128 00:10:19.529646 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:19 crc kubenswrapper[3021]: I1128 00:10:19.532158 3021 scope.go:117] "RemoveContainer" containerID="e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"
Nov 28 00:10:19 crc kubenswrapper[3021]: E1128 00:10:19.533423 3021 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Nov 28 00:10:19 crc kubenswrapper[3021]: I1128 00:10:19.630042 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:20 crc kubenswrapper[3021]: I1128 00:10:20.628732 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:21 crc kubenswrapper[3021]: I1128 00:10:21.628388 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:22 crc kubenswrapper[3021]: I1128 00:10:22.628351 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.628894 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.712994 3021 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.713261 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.715587 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.715640 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.715662 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.718235 3021 scope.go:117] "RemoveContainer" containerID="e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"
Nov 28 00:10:23 crc kubenswrapper[3021]: E1128 00:10:23.719440 3021 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.935041 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.936520 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.936558 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.936571 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:23 crc kubenswrapper[3021]: I1128 00:10:23.936602 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:10:23 crc kubenswrapper[3021]: E1128 00:10:23.936605 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:10:23 crc kubenswrapper[3021]: E1128 00:10:23.937859 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:10:24 crc kubenswrapper[3021]: I1128 00:10:24.628903 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:25 crc kubenswrapper[3021]: I1128 00:10:25.628858 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:26 crc kubenswrapper[3021]: I1128 00:10:26.628890 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:26 crc kubenswrapper[3021]: E1128 00:10:26.937075 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:10:27 crc kubenswrapper[3021]: I1128 00:10:27.628414 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:28 crc kubenswrapper[3021]: I1128 00:10:28.628714 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:28 crc kubenswrapper[3021]: E1128 00:10:28.899279 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:10:29 crc kubenswrapper[3021]: I1128 00:10:29.629256 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:30 crc kubenswrapper[3021]: I1128 00:10:30.629315 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:30 crc kubenswrapper[3021]: I1128 00:10:30.938363 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:30 crc kubenswrapper[3021]: E1128 00:10:30.938681 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:10:30 crc kubenswrapper[3021]: I1128 00:10:30.940439 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:30 crc kubenswrapper[3021]: I1128 00:10:30.940553 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:30 crc kubenswrapper[3021]: I1128 00:10:30.940577 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:30 crc kubenswrapper[3021]: I1128 00:10:30.940628 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:10:30 crc kubenswrapper[3021]: E1128 00:10:30.942315 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:10:31 crc kubenswrapper[3021]: I1128 00:10:31.628981 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:32 crc kubenswrapper[3021]: I1128 00:10:32.628399 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:33 crc kubenswrapper[3021]: I1128 00:10:33.628451 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:34 crc kubenswrapper[3021]: I1128 00:10:34.629699 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:34 crc kubenswrapper[3021]: I1128 00:10:34.827409 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:34 crc kubenswrapper[3021]: I1128 00:10:34.829952 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:34 crc kubenswrapper[3021]: I1128 00:10:34.829997 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:34 crc kubenswrapper[3021]: I1128 00:10:34.830017 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:35 crc kubenswrapper[3021]: I1128 00:10:35.627827 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable
Nov 28 00:10:36 crc kubenswrapper[3021]: I1128 00:10:36.627959 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable
Nov 28 00:10:36 crc kubenswrapper[3021]: E1128 00:10:36.938779 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.47.54:53: dial udp 199.204.47.54:53: connect: network is unreachable" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.827554 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.830355 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.830395 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.830408 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.833590 3021 scope.go:117] "RemoveContainer" containerID="e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"
Nov 28 00:10:37 crc kubenswrapper[3021]: E1128 00:10:37.835484 3021 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.942723 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.944329 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.944404 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.944428 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:37 crc kubenswrapper[3021]: I1128 00:10:37.944507 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:10:38 crc kubenswrapper[3021]: E1128 00:10:38.900394 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:10:45 crc kubenswrapper[3021]: I1128 00:10:45.826943 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:45 crc kubenswrapper[3021]: I1128 00:10:45.828898 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:45 crc kubenswrapper[3021]: I1128 00:10:45.828962 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:45 crc kubenswrapper[3021]: I1128 00:10:45.828981 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:47 crc kubenswrapper[3021]: E1128 00:10:47.940268 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Nov 28 00:10:48 crc kubenswrapper[3021]: I1128 00:10:48.632674 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:10:48 crc kubenswrapper[3021]: I1128 00:10:48.632753 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:10:48 crc kubenswrapper[3021]: I1128 00:10:48.632790 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:10:48 crc kubenswrapper[3021]: I1128 00:10:48.632819 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:10:48 crc kubenswrapper[3021]: I1128 00:10:48.632870 3021 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:10:48 crc kubenswrapper[3021]: E1128 00:10:48.901031 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:10:49 crc kubenswrapper[3021]: I1128 00:10:49.827425 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:49 crc kubenswrapper[3021]: I1128 00:10:49.829445 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:49 crc kubenswrapper[3021]: I1128 00:10:49.829540 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:49 crc kubenswrapper[3021]: I1128 00:10:49.829599 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:49 crc kubenswrapper[3021]: I1128 00:10:49.832256 3021 scope.go:117] "RemoveContainer" containerID="e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"
Nov 28 00:10:49 crc kubenswrapper[3021]: E1128 00:10:49.833639 3021 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-crc_openshift-kube-controller-manager(bd6a3a59e513625ca0ae3724df2686bc)\"" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc"
Nov 28 00:10:50 crc kubenswrapper[3021]: I1128 00:10:50.827306 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:10:50 crc kubenswrapper[3021]: I1128 00:10:50.828681 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:10:50 crc kubenswrapper[3021]: I1128 00:10:50.828729 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:10:50 crc kubenswrapper[3021]: I1128 00:10:50.828745 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:10:57 crc kubenswrapper[3021]: I1128 00:10:57.116324 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: E1128 00:10:57.116500 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout" interval="7s"
Nov 28 00:10:57 crc kubenswrapper[3021]: E1128 00:10:57.116521 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout" node="crc"
Nov 28 00:10:57 crc kubenswrapper[3021]: W1128 00:10:57.116537 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: W1128 00:10:57.116263 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: W1128 00:10:57.116637 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: E1128 00:10:57.116694 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: E1128 00:10:57.116722 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: W1128 00:10:57.116607 3021 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: E1128 00:10:57.116765 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: I1128 00:10:57.116790 3021 trace.go:236] Trace[1546480431]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (28-Nov-2025 00:10:41.804) (total time: 15311ms):
Nov 28 00:10:57 crc kubenswrapper[3021]: Trace[1546480431]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout 15311ms (00:10:57.116)
Nov 28 00:10:57 crc kubenswrapper[3021]: Trace[1546480431]: [15.311917104s] [15.311917104s] END
Nov 28 00:10:57 crc kubenswrapper[3021]: E1128 00:10:57.116832 3021 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout
Nov 28 00:10:57 crc kubenswrapper[3021]: E1128 00:10:57.117728 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on [::1]:53: read udp [::1]:48891->[::1]:53: i/o timeout" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:10:57 crc kubenswrapper[3021]: I1128 00:10:57.628229 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:58 crc kubenswrapper[3021]: I1128 00:10:58.629038 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:10:58 crc kubenswrapper[3021]: E1128 00:10:58.901868 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:10:59 crc kubenswrapper[3021]: I1128 00:10:59.628818 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:00 crc kubenswrapper[3021]: I1128 00:11:00.628291 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:01 crc kubenswrapper[3021]: I1128 00:11:01.627945 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:02 crc kubenswrapper[3021]: I1128 00:11:02.628620 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:02 crc kubenswrapper[3021]: I1128 00:11:02.827388 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:11:02 crc kubenswrapper[3021]: I1128 00:11:02.832483 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:11:02 crc kubenswrapper[3021]: I1128 00:11:02.833933 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:11:02 crc kubenswrapper[3021]: I1128 00:11:02.833968 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:11:03 crc kubenswrapper[3021]: I1128 00:11:03.629029 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.117685 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:11:04 crc kubenswrapper[3021]: E1128 00:11:04.119886 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.120439 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.120529 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.120550 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.120589 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:11:04 crc kubenswrapper[3021]: E1128 00:11:04.122788 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.628980 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.827274 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.828564 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.828692 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.828780 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:11:04 crc kubenswrapper[3021]: I1128 00:11:04.830396 3021 scope.go:117] "RemoveContainer" containerID="e0b35e5c71096b9f72f3a1aaae37d3d55ccb96971796d5d3adff81f08fc4d3e5"
Nov 28 00:11:05 crc kubenswrapper[3021]: I1128 00:11:05.628011 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:05 crc kubenswrapper[3021]: I1128 00:11:05.665322 3021 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/cluster-policy-controller/5.log"
Nov 28 00:11:05 crc kubenswrapper[3021]: I1128 00:11:05.666831 3021 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"b3fffb5d6460f91982a205bae20c0478ca8f7bf9ed5802e0178578358b8aded1"}
Nov 28 00:11:05 crc kubenswrapper[3021]: I1128 00:11:05.666991 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:11:05 crc kubenswrapper[3021]: I1128 00:11:05.668499 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:11:05 crc kubenswrapper[3021]: I1128 00:11:05.668542 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:11:05 crc kubenswrapper[3021]: I1128 00:11:05.668560 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:11:06 crc kubenswrapper[3021]: I1128 00:11:06.629041 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:07 crc kubenswrapper[3021]: E1128 00:11:07.120388 3021 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" event="&Event{ObjectMeta:{crc.187c0303c3413d88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,LastTimestamp:2025-11-28 00:07:48.718009736 +0000 UTC m=+0.367701604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:11:07 crc kubenswrapper[3021]: I1128 00:11:07.628357 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host
Nov 28 00:11:08 crc kubenswrapper[3021]: I1128 00:11:08.628843 3021 csi_plugin.go:880] Failed to
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:11:08 crc kubenswrapper[3021]: E1128 00:11:08.902047 3021 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:11:09 crc kubenswrapper[3021]: I1128 00:11:09.482673 3021 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:11:09 crc kubenswrapper[3021]: I1128 00:11:09.482964 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:11:09 crc kubenswrapper[3021]: I1128 00:11:09.485846 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:11:09 crc kubenswrapper[3021]: I1128 00:11:09.485921 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:11:09 crc kubenswrapper[3021]: I1128 00:11:09.485942 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:11:09 crc kubenswrapper[3021]: I1128 00:11:09.628581 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:11:10 crc kubenswrapper[3021]: I1128 00:11:10.628741 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:11:11 crc kubenswrapper[3021]: E1128 00:11:11.122800 3021 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" interval="7s" Nov 28 00:11:11 crc kubenswrapper[3021]: I1128 00:11:11.122901 3021 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:11:11 crc kubenswrapper[3021]: I1128 00:11:11.124753 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:11:11 crc kubenswrapper[3021]: I1128 00:11:11.124815 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:11:11 crc kubenswrapper[3021]: I1128 00:11:11.124836 3021 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:11:11 crc kubenswrapper[3021]: I1128 00:11:11.124877 3021 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:11:11 crc kubenswrapper[3021]: E1128 00:11:11.126641 3021 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host" node="crc" Nov 28 00:11:11 crc kubenswrapper[3021]: I1128 00:11:11.629134 3021 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp: lookup api-int.crc.testing on 199.204.44.24:53: no such host Nov 28 00:11:12 crc kubenswrapper[3021]: I1128 00:11:12.300844 3021 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated" Nov 28 00:11:12 crc systemd[1]: Stopping Kubernetes Kubelet... 
Nov 28 00:11:12 crc kubenswrapper[3021]: I1128 00:11:12.792895 3021 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 00:11:12 crc systemd[1]: kubelet.service: Deactivated successfully. Nov 28 00:11:12 crc systemd[1]: Stopped Kubernetes Kubelet. Nov 28 00:11:12 crc systemd[1]: kubelet.service: Consumed 11.847s CPU time. -- Boot f64a486e95e64cde92396c5687a0a002 -- Nov 28 00:12:18 crc systemd[1]: Starting Kubernetes Kubelet... Nov 28 00:12:18 crc kubenswrapper[3556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 00:12:18 crc kubenswrapper[3556]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 00:12:18 crc kubenswrapper[3556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 00:12:18 crc kubenswrapper[3556]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 00:12:18 crc kubenswrapper[3556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 00:12:18 crc kubenswrapper[3556]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.535058 3556 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536689 3556 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536702 3556 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536708 3556 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536715 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536721 3556 feature_gate.go:227] unrecognized feature gate: Example Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536726 3556 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536731 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536736 3556 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536741 3556 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536748 3556 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536753 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536758 3556 feature_gate.go:227] unrecognized feature gate: MetricsServer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 
00:12:18.536763 3556 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536768 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536773 3556 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536778 3556 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536783 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536788 3556 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536793 3556 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536799 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536805 3556 feature_gate.go:227] unrecognized feature gate: SignatureStores Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536814 3556 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536820 3556 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536825 3556 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536831 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536836 3556 feature_gate.go:227] unrecognized feature gate: ImagePolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536842 3556 feature_gate.go:227] 
unrecognized feature gate: ExternalCloudProviderGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536849 3556 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536855 3556 feature_gate.go:227] unrecognized feature gate: PinnedImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536861 3556 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536867 3556 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536873 3556 feature_gate.go:227] unrecognized feature gate: PlatformOperators Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536878 3556 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536885 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536890 3556 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536896 3556 feature_gate.go:227] unrecognized feature gate: GatewayAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536901 3556 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536906 3556 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536912 3556 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536917 3556 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536923 3556 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 
00:12:18.536928 3556 feature_gate.go:227] unrecognized feature gate: NewOLM Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536934 3556 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536939 3556 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536945 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536950 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536956 3556 feature_gate.go:227] unrecognized feature gate: InsightsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536961 3556 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536967 3556 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536972 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536977 3556 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536983 3556 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536988 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536993 3556 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.536999 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537021 3556 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537027 3556 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537033 3556 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537039 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537045 3556 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537114 3556 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537129 3556 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537138 3556 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537144 3556 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537152 3556 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537156 3556 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537163 3556 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537169 3556 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537173 3556 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537178 3556 flags.go:64] FLAG: --azure-container-registry-config="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537182 3556 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 
00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537187 3556 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537191 3556 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537196 3556 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537200 3556 flags.go:64] FLAG: --cgroup-root="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537204 3556 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537208 3556 flags.go:64] FLAG: --client-ca-file="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537213 3556 flags.go:64] FLAG: --cloud-config="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537217 3556 flags.go:64] FLAG: --cloud-provider="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537221 3556 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537227 3556 flags.go:64] FLAG: --cluster-domain="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537231 3556 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537235 3556 flags.go:64] FLAG: --config-dir="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537239 3556 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537244 3556 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537249 3556 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537254 3556 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537258 3556 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" 
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537263 3556 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537267 3556 flags.go:64] FLAG: --contention-profiling="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537271 3556 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537276 3556 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537280 3556 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537286 3556 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537293 3556 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537298 3556 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537303 3556 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537307 3556 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537312 3556 flags.go:64] FLAG: --enable-server="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537316 3556 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537324 3556 flags.go:64] FLAG: --event-burst="100" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537329 3556 flags.go:64] FLAG: --event-qps="50" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537334 3556 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537339 3556 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537344 3556 flags.go:64] FLAG: --eviction-hard="" Nov 28 
00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537349 3556 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537353 3556 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537357 3556 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537365 3556 flags.go:64] FLAG: --eviction-soft="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537369 3556 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537373 3556 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537377 3556 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537381 3556 flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537386 3556 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537390 3556 flags.go:64] FLAG: --feature-gates="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537395 3556 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537400 3556 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537404 3556 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537408 3556 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537413 3556 flags.go:64] FLAG: --healthz-port="10248" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537417 3556 flags.go:64] FLAG: --help="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537421 3556 flags.go:64] FLAG: --hostname-override="" Nov 28 
00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537425 3556 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537429 3556 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537433 3556 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537438 3556 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537442 3556 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537446 3556 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537450 3556 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537454 3556 flags.go:64] FLAG: --iptables-drop-bit="15" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537458 3556 flags.go:64] FLAG: --iptables-masquerade-bit="14" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537463 3556 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537468 3556 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537472 3556 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537477 3556 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537481 3556 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537485 3556 flags.go:64] FLAG: --kube-reserved="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537490 3556 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537493 3556 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537498 3556 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537504 3556 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537509 3556 flags.go:64] FLAG: --lock-file="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537514 3556 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537518 3556 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537523 3556 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537529 3556 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537533 3556 flags.go:64] FLAG: --logging-format="text" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537538 3556 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537542 3556 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537546 3556 flags.go:64] FLAG: --manifest-url="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537551 3556 flags.go:64] FLAG: --manifest-url-header="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537556 3556 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537561 3556 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537566 3556 flags.go:64] FLAG: --max-pods="110" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537571 3556 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537575 3556 flags.go:64] FLAG: 
--maximum-dead-containers-per-container="1" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537579 3556 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537586 3556 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537606 3556 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537610 3556 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537615 3556 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537626 3556 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537630 3556 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537634 3556 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537639 3556 flags.go:64] FLAG: --pod-cidr="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537643 3556 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0319702e115e7248d135e58342ccf3f458e19c39e86dc8e79036f578ce80a4" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537651 3556 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537656 3556 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537660 3556 flags.go:64] FLAG: --pods-per-core="0" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537664 3556 flags.go:64] FLAG: --port="10250" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537668 3556 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 00:12:18 crc 
kubenswrapper[3556]: I1128 00:12:18.537673 3556 flags.go:64] FLAG: --provider-id="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537679 3556 flags.go:64] FLAG: --qos-reserved="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537683 3556 flags.go:64] FLAG: --read-only-port="10255" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537687 3556 flags.go:64] FLAG: --register-node="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537692 3556 flags.go:64] FLAG: --register-schedulable="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537696 3556 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537705 3556 flags.go:64] FLAG: --registry-burst="10" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537709 3556 flags.go:64] FLAG: --registry-qps="5" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537713 3556 flags.go:64] FLAG: --reserved-cpus="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537717 3556 flags.go:64] FLAG: --reserved-memory="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537722 3556 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537726 3556 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537731 3556 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537735 3556 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537739 3556 flags.go:64] FLAG: --runonce="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537743 3556 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537748 3556 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 
00:12:18.537752 3556 flags.go:64] FLAG: --seccomp-default="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537756 3556 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537762 3556 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537767 3556 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537771 3556 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537775 3556 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537779 3556 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537784 3556 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537788 3556 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537792 3556 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537796 3556 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537800 3556 flags.go:64] FLAG: --system-cgroups="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537804 3556 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537817 3556 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537822 3556 flags.go:64] FLAG: --tls-cert-file="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537826 3556 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537834 3556 flags.go:64] FLAG: --tls-min-version="" Nov 28 00:12:18 crc 
kubenswrapper[3556]: I1128 00:12:18.537838 3556 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537842 3556 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537846 3556 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537851 3556 flags.go:64] FLAG: --topology-manager-scope="container" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537855 3556 flags.go:64] FLAG: --v="2" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537861 3556 flags.go:64] FLAG: --version="false" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537868 3556 flags.go:64] FLAG: --vmodule="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537874 3556 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.537880 3556 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537942 3556 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537948 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537954 3556 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537959 3556 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537964 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537970 3556 feature_gate.go:227] unrecognized feature gate: SignatureStores Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537975 3556 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Nov 28 
00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537980 3556 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537987 3556 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537993 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.537998 3556 feature_gate.go:227] unrecognized feature gate: ImagePolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538003 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538007 3556 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538028 3556 feature_gate.go:227] unrecognized feature gate: PinnedImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538033 3556 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538038 3556 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538043 3556 feature_gate.go:227] unrecognized feature gate: PlatformOperators Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538048 3556 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538053 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538058 3556 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538063 3556 feature_gate.go:227] unrecognized feature gate: GatewayAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538069 3556 feature_gate.go:227] 
unrecognized feature gate: InsightsConfigAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538076 3556 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538081 3556 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538087 3556 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538092 3556 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538097 3556 feature_gate.go:227] unrecognized feature gate: NewOLM Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538102 3556 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538107 3556 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538112 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538117 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538122 3556 feature_gate.go:227] unrecognized feature gate: InsightsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538127 3556 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538132 3556 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538137 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538142 3556 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538147 3556 
feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538152 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538157 3556 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538162 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538169 3556 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538174 3556 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538179 3556 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538184 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538190 3556 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538196 3556 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538203 3556 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538209 3556 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538213 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538218 3556 feature_gate.go:227] unrecognized feature gate: Example Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538224 3556 feature_gate.go:227] 
unrecognized feature gate: EtcdBackendQuota Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538229 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538234 3556 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538239 3556 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538253 3556 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538259 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538265 3556 feature_gate.go:227] unrecognized feature gate: MetricsServer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538271 3556 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538277 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.538284 3556 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.538290 3556 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.549485 3556 server.go:487] "Kubelet version" kubeletVersion="v1.29.5+29c95f3" Nov 28 00:12:18 crc 
kubenswrapper[3556]: I1128 00:12:18.549536 3556 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549610 3556 feature_gate.go:227] unrecognized feature gate: NetworkLiveMigration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549627 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549640 3556 feature_gate.go:227] unrecognized feature gate: ImagePolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549651 3556 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549663 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549674 3556 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549685 3556 feature_gate.go:227] unrecognized feature gate: PinnedImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549696 3556 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549707 3556 feature_gate.go:227] unrecognized feature gate: PlatformOperators Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549718 3556 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549729 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549740 3556 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549751 3556 feature_gate.go:227] unrecognized feature gate: GatewayAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549762 3556 feature_gate.go:227] unrecognized 
feature gate: InsightsConfigAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549773 3556 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549784 3556 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549795 3556 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549806 3556 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549817 3556 feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549828 3556 feature_gate.go:227] unrecognized feature gate: NewOLM Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549839 3556 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549850 3556 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549861 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549874 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549886 3556 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549897 3556 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549907 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549918 3556 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549930 3556 feature_gate.go:227] 
unrecognized feature gate: InsightsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549940 3556 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549952 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549964 3556 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549975 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549986 3556 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.549997 3556 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550008 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550047 3556 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550058 3556 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550072 3556 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550084 3556 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550095 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550106 3556 feature_gate.go:227] unrecognized feature gate: Example Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550117 3556 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550128 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550139 3556 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550150 3556 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550161 3556 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550171 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550182 3556 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550193 3556 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550204 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550215 3556 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Nov 28 00:12:18 crc kubenswrapper[3556]: 
W1128 00:12:18.550225 3556 feature_gate.go:227] unrecognized feature gate: MetricsServer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550236 3556 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550248 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550258 3556 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550269 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550279 3556 feature_gate.go:227] unrecognized feature gate: SignatureStores Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550292 3556 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550304 3556 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.550318 3556 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550473 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550487 3556 feature_gate.go:227] unrecognized feature gate: DNSNameResolver Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550499 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderExternal Nov 28 00:12:18 
crc kubenswrapper[3556]: W1128 00:12:18.550511 3556 feature_gate.go:227] unrecognized feature gate: InstallAlternateInfrastructureAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550521 3556 feature_gate.go:240] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550533 3556 feature_gate.go:227] unrecognized feature gate: MixedCPUsAllocation Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550546 3556 feature_gate.go:227] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550558 3556 feature_gate.go:227] unrecognized feature gate: BuildCSIVolumes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550568 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550579 3556 feature_gate.go:227] unrecognized feature gate: Example Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550590 3556 feature_gate.go:227] unrecognized feature gate: GCPClusterHostedDNS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550601 3556 feature_gate.go:227] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550613 3556 feature_gate.go:227] unrecognized feature gate: CSIDriverSharedResource Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550627 3556 feature_gate.go:227] unrecognized feature gate: ChunkSizeMiB Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550641 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallNutanix Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550655 3556 feature_gate.go:227] unrecognized feature gate: EtcdBackendQuota Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550670 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 
00:12:18.550684 3556 feature_gate.go:227] unrecognized feature gate: AzureWorkloadIdentity Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550698 3556 feature_gate.go:227] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550709 3556 feature_gate.go:227] unrecognized feature gate: MetricsCollectionProfiles Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550720 3556 feature_gate.go:227] unrecognized feature gate: MetricsServer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550733 3556 feature_gate.go:227] unrecognized feature gate: AutomatedEtcdBackup Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550747 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallVSphere Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550763 3556 feature_gate.go:227] unrecognized feature gate: VSphereDriverConfiguration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550777 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAWS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550791 3556 feature_gate.go:227] unrecognized feature gate: SignatureStores Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550805 3556 feature_gate.go:227] unrecognized feature gate: UpgradeStatus Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550819 3556 feature_gate.go:227] unrecognized feature gate: VolumeGroupSnapshot Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550833 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550844 3556 feature_gate.go:227] unrecognized feature gate: ImagePolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550856 3556 feature_gate.go:227] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550871 3556 feature_gate.go:227] unrecognized 
feature gate: NetworkLiveMigration Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550884 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderGCP Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550898 3556 feature_gate.go:227] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550916 3556 feature_gate.go:227] unrecognized feature gate: PinnedImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550932 3556 feature_gate.go:227] unrecognized feature gate: PlatformOperators Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550947 3556 feature_gate.go:227] unrecognized feature gate: AdminNetworkPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550960 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProviderAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550971 3556 feature_gate.go:227] unrecognized feature gate: ExternalRouteCertificate Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.550986 3556 feature_gate.go:227] unrecognized feature gate: MachineConfigNodes Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551000 3556 feature_gate.go:227] unrecognized feature gate: GatewayAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551048 3556 feature_gate.go:227] unrecognized feature gate: InsightsConfigAPI Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551064 3556 feature_gate.go:227] unrecognized feature gate: ManagedBootImages Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551078 3556 feature_gate.go:227] unrecognized feature gate: NodeDisruptionPolicy Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551092 3556 feature_gate.go:227] unrecognized feature gate: ExternalOIDC Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551103 3556 feature_gate.go:227] unrecognized feature gate: GCPLabelsTags Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551114 3556 
feature_gate.go:227] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551125 3556 feature_gate.go:227] unrecognized feature gate: NewOLM Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551135 3556 feature_gate.go:227] unrecognized feature gate: SigstoreImageVerification Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551146 3556 feature_gate.go:227] unrecognized feature gate: VSphereMultiVCenters Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551157 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstall Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551168 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallAzure Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551180 3556 feature_gate.go:227] unrecognized feature gate: AlibabaPlatform Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551191 3556 feature_gate.go:227] unrecognized feature gate: ExternalCloudProvider Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551202 3556 feature_gate.go:227] unrecognized feature gate: HardwareSpeed Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551213 3556 feature_gate.go:227] unrecognized feature gate: InsightsConfig Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551224 3556 feature_gate.go:227] unrecognized feature gate: OnClusterBuild Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551234 3556 feature_gate.go:227] unrecognized feature gate: BareMetalLoadBalancer Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551245 3556 feature_gate.go:227] unrecognized feature gate: ClusterAPIInstallPowerVS Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.551256 3556 feature_gate.go:227] unrecognized feature gate: VSphereStaticIPs Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.551268 3556 feature_gate.go:250] feature gates: &{map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false ServiceAccountTokenNodeBindingValidation:false ServiceAccountTokenPodNodeInfo:false TranslateStreamCloseWebsocketRequests:false ValidatingAdmissionPolicy:false]} Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.551858 3556 server.go:925] "Client rotation is on, will bootstrap in background" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.557246 3556 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.558330 3556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.558796 3556 server.go:982] "Starting client certificate rotation" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.558829 3556 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.559443 3556 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-28 10:25:11.176519922 +0000 UTC Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.559595 3556 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 3634h12m52.61693001s for next certificate rotation Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.565281 3556 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.568884 3556 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 00:12:18 crc 
kubenswrapper[3556]: I1128 00:12:18.569979 3556 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.588574 3556 remote_runtime.go:143] "Validated CRI v1 runtime API" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.588695 3556 util_unix.go:103] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/var/run/crio/crio.sock" URL="unix:///var/run/crio/crio.sock" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.619039 3556 remote_image.go:111] "Validated CRI v1 image API" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.624910 3556 fs.go:132] Filesystem UUIDs: map[2025-11-28-00-07-21-00:/dev/sr0 68d6f3e9-64e9-44a4-a1d0-311f9c629a01:/dev/vda4 6ea7ef63-bc43-49c4-9337-b3b14ffb2763:/dev/vda3 7B77-95E7:/dev/vda2] Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.624990 3556 fs.go:133] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.653074 3556 manager.go:217] Machine: {Timestamp:2025-11-28 00:12:18.651355781 +0000 UTC m=+0.243587811 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 
NumPages:0}] MachineID:c1bd596843fb445da20eca66471ddf66 SystemUUID:b43e451d-7b03-476c-9a13-16cc174618c5 BootID:f64a486e-95e6-4cde-9239-6c5687a0a002 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85294297088 Type:vfs Inodes:41680320 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:46:ba:73 Speed:0 Mtu:1500} {Name:br-int MacAddress:4e:ec:11:72:80:3b Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:46:ba:73 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2f:a4:c1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:89:b9:aa Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3a:49:2f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:dc:ec:02 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1e:d2:dc:6a:38:e8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:b6:dc:d9:26:03:d4 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:74:d2:6a:f0:e3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data 
Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} 
{Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.653346 3556 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.653499 3556 manager.go:233] Version: {KernelVersion:5.14.0-427.22.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 416.94.202406172220-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.654906 3556 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.655165 3556 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.655419 3556 topology_manager.go:138] "Creating topology manager with none policy" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.655443 3556 container_manager_linux.go:304] "Creating device plugin manager" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.655627 3556 manager.go:136] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.655863 3556 server.go:66] "Creating device plugin registration 
server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.656467 3556 state_mem.go:36] "Initialized new in-memory state store" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.656581 3556 server.go:1227] "Using root directory" path="/var/lib/kubelet" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.657346 3556 kubelet.go:406] "Attempting to sync node with API server" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.657376 3556 kubelet.go:311] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.657405 3556 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.657422 3556 kubelet.go:322] "Adding apiserver pod source" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.657619 3556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.659428 3556 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="cri-o" version="1.29.5-5.rhaos4.16.git7032128.el9" apiVersion="v1" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.660373 3556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661033 3556 kubelet.go:826] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661512 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/azure-file" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661545 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661555 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/rbd" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661570 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661580 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661594 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661604 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661614 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661627 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661635 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/cephfs" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661648 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661656 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661666 3556 plugins.go:642] "Loaded volume plugin" 
pluginName="kubernetes.io/configmap" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661679 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661687 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.661932 3556 plugins.go:642] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.662407 3556 server.go:1262] "Started kubelet" Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.663089 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:18 crc kubenswrapper[3556]: E1128 00:12:18.663174 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:18 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.663237 3556 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.663422 3556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.663557 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:18 crc kubenswrapper[3556]: E1128 00:12:18.663674 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.664357 3556 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.664722 3556 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:18 crc kubenswrapper[3556]: E1128 00:12:18.664940 3556 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c03429d315212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:12:18.662371858 +0000 UTC m=+0.254603848,LastTimestamp:2025-11-28 00:12:18.662371858 +0000 UTC m=+0.254603848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.672276 3556 server.go:461] "Adding debug handlers to kubelet server" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.682408 3556 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.682448 3556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.682874 3556 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-06-27 13:05:20 +0000 UTC, rotation deadline is 2026-04-22 01:47:38.376754659 +0000 UTC Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.682979 3556 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 3481h35m19.693782235s for next certificate rotation Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.683903 3556 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.686694 3556 factory.go:153] Registering CRI-O factory Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.686737 3556 factory.go:221] Registration of the crio container factory successfully Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.687107 3556 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.687122 3556 factory.go:55] Registering systemd 
factory Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.687136 3556 factory.go:221] Registration of the systemd container factory successfully Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.687169 3556 factory.go:103] Registering Raw factory Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.687187 3556 manager.go:1196] Started watching for new ooms in manager Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.687872 3556 volume_manager.go:289] "The desired_state_of_world populator starts" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.688030 3556 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.688041 3556 manager.go:319] Starting recovery of all containers Nov 28 00:12:18 crc kubenswrapper[3556]: E1128 00:12:18.688293 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Nov 28 00:12:18 crc kubenswrapper[3556]: W1128 00:12:18.688829 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:18 crc kubenswrapper[3556]: E1128 00:12:18.689069 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.714383 3556 manager.go:324] Recovery completed Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.729970 3556 kubelet_node_status.go:402] "Setting 
node annotation to enable volume controller attach/detach" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.739725 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.739810 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.739859 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.741245 3556 cpu_manager.go:215] "Starting CPU manager" policy="none" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.741269 3556 cpu_manager.go:216] "Reconciling" reconcilePeriod="10s" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.741291 3556 state_mem.go:36] "Initialized new in-memory state store" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753220 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753286 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753307 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753326 3556 
reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753343 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d0f40333-c860-4c04-8058-a0bf572dcf12" volumeName="kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753360 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753373 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753387 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753409 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753430 3556 
reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753445 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753458 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" volumeName="kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753480 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753498 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753511 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753531 3556 reconstruct_new.go:135] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753544 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753557 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753576 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753588 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753606 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753618 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753636 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753652 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753672 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753688 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753706 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753722 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" 
volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753738 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753755 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753773 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753789 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753805 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753821 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" 
volumeName="kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753838 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753852 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753874 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753890 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753904 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753924 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753939 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753960 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6268b7fe-8910-4505-b404-6f1df638105c" volumeName="kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753976 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.753995 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754031 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5d722a-1123-4935-9740-52a08d018bc9" volumeName="kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754070 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754085 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754100 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754158 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754186 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754211 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754233 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754257 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754275 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf1a8966-f594-490a-9fbb-eec5bafd13d3" volumeName="kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754312 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754330 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a48baf-1bee-4921-8bb2-9b7320e76f79" volumeName="kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754351 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754365 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754387 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754400 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754413 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754432 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754446 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754467 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754486 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754513 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754531 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754551 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754565 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754585 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754604 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754622 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754642 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754657 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754677 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754692 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.754715 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.759863 3556 reconstruct_new.go:149] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount"
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.759940 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.759958 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.759977 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.759995 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760029 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760049 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760066 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760083 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="59748b9b-c309-4712-aa85-bb38d71c4915" volumeName="kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760098 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760116 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760140 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760154 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760168 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760183 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760196 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760212 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760226 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760247 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760264 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760322 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760345 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4a7de23-6134-4044-902a-0900dc04a501" volumeName="kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760365 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760389 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760409 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760428 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760443 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760461 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760477 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760492 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760531 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760548 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="12e733dd-0939-4f1b-9cbb-13897e093787" volumeName="kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760561 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760575 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a23c0ee-5648-448c-b772-83dced2891ce" volumeName="kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760590 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760604 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" volumeName="kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760617 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c782cf62-a827-4677-b3c2-6f82c5f09cbb" volumeName="kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760629 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760643 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" volumeName="kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760655 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760671 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760686 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="51a02bbf-2d40-4f84-868a-d399ea18a846" volumeName="kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760701 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760713 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760728 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="297ab9b6-2186-4d5b-a952-2bfd59af63c4" volumeName="kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760740 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760758 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760772 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" volumeName="kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760786 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760799 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760811 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760825 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb762d1-812f-43f1-9eac-68034c1ecec7" volumeName="kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760839 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" volumeName="kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760852 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760865 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4092a9f8-5acc-4932-9e90-ef962eeb301a" volumeName="kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760877 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="475321a1-8b7e-4033-8f72-b05a8b377347" volumeName="kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760890 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760903 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760916 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" volumeName="kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760928 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760943 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b5c38ff-1fa8-4219-994d-15776acd4a4d" volumeName="kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760956 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760968 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760982 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.760996 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761024 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f4dca86-e6ee-4ec9-8324-86aff960225e" volumeName="kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761037 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="41e8708a-e40d-4d28-846b-c52eda4d1755" volumeName="kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761049 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="71af81a9-7d43-49b2-9287-c375900aa905" volumeName="kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761062 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761075 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761088 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc291782-27d2-4a74-af79-c7dcb31535d2" volumeName="kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761100 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761113 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" volumeName="kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761126 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761142 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d0dcce3-d96e-48cb-9b9f-362105911589" volumeName="kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761156 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761168 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761181 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761194 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" volumeName="kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761206 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761220 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1a8b70-3856-486f-9912-a2de1d57c3fb" volumeName="kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761233 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca" seLinuxMountContext=""
Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761246 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
volumeName="kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761260 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761272 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="410cf605-1970-4691-9c95-53fdc123b1f3" volumeName="kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761285 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761298 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d67253e-2acd-4bc1-8185-793587da4f17" volumeName="kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761311 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761323 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" 
volumeName="kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761336 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" volumeName="kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761351 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f394926-bdb9-425c-b36e-264d7fd34550" volumeName="kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761364 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a3e81c3-c292-4130-9436-f94062c91efd" volumeName="kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761384 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761400 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761414 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" 
volumeName="kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761427 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" volumeName="kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761441 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" volumeName="kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761455 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761468 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b6d14a5-ca00-40c7-af7a-051a98a24eed" volumeName="kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761481 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e19f9e8-9a37-4ca8-9790-c219750ab482" volumeName="kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761495 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" 
volumeName="kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761509 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761522 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" volumeName="kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761534 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761546 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="10603adc-d495-423c-9459-4caa405960bb" volumeName="kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761559 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d51f445-054a-4e4f-a67b-a828f5a32511" volumeName="kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761571 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" 
volumeName="kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761583 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" volumeName="kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761596 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761609 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761621 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761633 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="c085412c-b875-46c9-ae3e-e6b0d8067091" volumeName="kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761645 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" 
volumeName="kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761657 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="01feb2e0-a0f4-4573-8335-34e364e0ef40" volumeName="kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761671 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" volumeName="kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761684 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="5bacb25d-97b6-4491-8fb4-99feae1d802a" volumeName="kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761723 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" volumeName="kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761741 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a5ae51d-d173-4531-8975-f164c975ce1f" volumeName="kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761757 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="21d29937-debd-4407-b2b1-d1053cb0f342" 
volumeName="kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761775 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="887d596e-c519-4bfa-af90-3edd9e1b2f0f" volumeName="kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761792 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" volumeName="kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761809 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="120b38dc-8236-4fa6-a452-642b8ad738ee" volumeName="kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761830 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" volumeName="kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761843 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd556935-a077-45df-ba3f-d42c39326ccd" volumeName="kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761857 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" 
volumeName="kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761870 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed024e5d-8fc2-4c22-803d-73f3c9795f19" volumeName="kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761887 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="530553aa-0a1d-423e-8a22-f5eb4bdbb883" volumeName="kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761910 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" volumeName="kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761928 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" volumeName="kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761940 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="13045510-8717-4a71-ade4-be95a76440a7" volumeName="kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761952 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa90b3c2-febd-4588-a063-7fbbe82f00c1" 
volumeName="kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761968 3556 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54e8941-2fc4-432a-9e51-39684df9089e" volumeName="kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls" seLinuxMountContext="" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761980 3556 reconstruct_new.go:102] "Volume reconstruction finished" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.761991 3556 reconciler_new.go:29] "Reconciler: start to sync state" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.783271 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.786523 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.786592 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.786608 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:18 crc kubenswrapper[3556]: I1128 00:12:18.786647 3556 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:12:18 crc kubenswrapper[3556]: E1128 00:12:18.789277 3556 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:18.889932 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.904342 3556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.911561 3556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.911718 3556 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.911751 3556 kubelet.go:2343] "Starting kubelet main sync loop" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:18.911820 3556 kubelet.go:2367] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 00:12:19 crc kubenswrapper[3556]: W1128 00:12:18.913413 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:18.913478 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.990213 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.992778 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.992824 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.992876 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:18.992912 3556 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:18.994198 3556 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.012345 3556 kubelet.go:2367] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.063643 3556 policy_none.go:49] "None policy: Start" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.065390 3556 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.065432 3556 state_mem.go:35] "Initializing new in-memory state store" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.213352 3556 kubelet.go:2367] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.276410 3556 manager.go:296] "Starting Device Plugin manager" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.276547 3556 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.276569 3556 server.go:79] "Starting device plugin registration server" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 
00:12:19.277326 3556 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.277452 3556 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.277475 3556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.286291 3556 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.291499 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.394625 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.396253 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.396309 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.396321 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.396357 3556 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.397356 3556 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc" 
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.614486 3556 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.614609 3556 topology_manager.go:215] "Topology Admit Handler" podUID="d3ae206906481b4831fd849b559269c8" podNamespace="openshift-machine-config-operator" podName="kube-rbac-proxy-crio-crc" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.614698 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.617167 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.617687 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.617698 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.617846 3556 topology_manager.go:215] "Topology Admit Handler" podUID="b2a6a3b2ca08062d24afa4c01aaf9e4f" podNamespace="openshift-etcd" podName="etcd-crc" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.617902 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.618067 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.618123 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619111 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619141 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619145 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619200 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619317 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619274 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619570 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ae85115fdc231b4002b57317b41a6400" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619669 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619711 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.619762 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621147 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621168 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621179 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621327 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621364 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621380 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621547 3556 topology_manager.go:215] "Topology Admit Handler" podUID="bd6a3a59e513625ca0ae3724df2686bc" podNamespace="openshift-kube-controller-manager" podName="kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621582 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621650 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.621688 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.622764 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.622794 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.622809 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.622938 3556 topology_manager.go:215] "Topology Admit Handler" podUID="6a57a7fb1944b43a6bd11a349520d301" podNamespace="openshift-kube-scheduler" podName="openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.622966 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.622977 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.622988 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.623029 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.623295 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.623324 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.623773 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.623819 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.623832 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.624056 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.624130 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.625120 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.625149 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.625162 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.625160 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.625187 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.625198 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.666361 3556 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:19 crc kubenswrapper[3556]: W1128 00:12:19.669364 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.669485 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:19 crc kubenswrapper[3556]: W1128 00:12:19.731347 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.731507 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.781962 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782099 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782149 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782301 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782340 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782371 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782412 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782441 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782468 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782501 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782646 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782761 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782862 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.782930 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.783106 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: W1128 00:12:19.785502 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:19 crc kubenswrapper[3556]: E1128 00:12:19.785537 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885161 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885239 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885281 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885321 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885373 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bd6a3a59e513625ca0ae3724df2686bc-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"bd6a3a59e513625ca0ae3724df2686bc\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885427 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885362 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885428 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6a57a7fb1944b43a6bd11a349520d301-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"6a57a7fb1944b43a6bd11a349520d301\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885676 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885714 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885743 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885766 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-cert-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885773 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885836 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-data-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885846 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885871 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-usr-local-bin\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885905 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885929 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.885940 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-resource-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886049 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886064 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886102 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886121 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886163 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886200 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886216 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-static-pod-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886261 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886277 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886318 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d3ae206906481b4831fd849b559269c8-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d3ae206906481b4831fd849b559269c8\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.886387 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b2a6a3b2ca08062d24afa4c01aaf9e4f-log-dir\") pod \"etcd-crc\" (UID: \"b2a6a3b2ca08062d24afa4c01aaf9e4f\") " pod="openshift-etcd/etcd-crc"
Nov 28 00:12:19 crc kubenswrapper[3556]: I1128 00:12:19.984570 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 28 00:12:20 crc kubenswrapper[3556]: W1128 00:12:20.009689 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ae206906481b4831fd849b559269c8.slice/crio-8baa4879c8b7ab4a3ce82e9aa0445fec90fad4172f8b13dd9f5893da4b890c0a WatchSource:0}: Error finding container 8baa4879c8b7ab4a3ce82e9aa0445fec90fad4172f8b13dd9f5893da4b890c0a: Status 404 returned error can't find the container with id 8baa4879c8b7ab4a3ce82e9aa0445fec90fad4172f8b13dd9f5893da4b890c0a
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.010285 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 28 00:12:20 crc kubenswrapper[3556]: W1128 00:12:20.023870 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a6a3b2ca08062d24afa4c01aaf9e4f.slice/crio-a001e61c2d75c26fb19a1d61f877b844f15c400ecad2478d6a12ae4e5b9ffa98 WatchSource:0}: Error finding container a001e61c2d75c26fb19a1d61f877b844f15c400ecad2478d6a12ae4e5b9ffa98: Status 404 returned error can't find the container with id a001e61c2d75c26fb19a1d61f877b844f15c400ecad2478d6a12ae4e5b9ffa98
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.031504 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.039616 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.039811 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:20 crc kubenswrapper[3556]: W1128 00:12:20.055655 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6a3a59e513625ca0ae3724df2686bc.slice/crio-8f07f5ccae868aad7760b73c599016f6b8e5a582590f0ea9b5f64fa52abddb54 WatchSource:0}: Error finding container 8f07f5ccae868aad7760b73c599016f6b8e5a582590f0ea9b5f64fa52abddb54: Status 404 returned error can't find the container with id 8f07f5ccae868aad7760b73c599016f6b8e5a582590f0ea9b5f64fa52abddb54
Nov 28 00:12:20 crc kubenswrapper[3556]: W1128 00:12:20.056726 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a57a7fb1944b43a6bd11a349520d301.slice/crio-679b3060cb7370b91bc3fa37c9ec377cf834ac59f21696269d35b04cd4ffde4f WatchSource:0}: Error finding container 679b3060cb7370b91bc3fa37c9ec377cf834ac59f21696269d35b04cd4ffde4f: Status 404 returned error can't find the container with id 679b3060cb7370b91bc3fa37c9ec377cf834ac59f21696269d35b04cd4ffde4f
Nov 28 00:12:20 crc kubenswrapper[3556]: E1128 00:12:20.092546 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="1.6s"
Nov 28 00:12:20 crc kubenswrapper[3556]: W1128 00:12:20.148260 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:20 crc kubenswrapper[3556]: E1128 00:12:20.148351 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.198340 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.200597 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.200660 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.200674 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.200700 3556 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:12:20 crc kubenswrapper[3556]: E1128 00:12:20.201467 3556 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc"
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.669095 3556 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.919773 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"9b7bb323c59113896b02d3a6a8d82c3d50ede56c6eb8a942c85308b4f55bcfd0"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.919843 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"679b3060cb7370b91bc3fa37c9ec377cf834ac59f21696269d35b04cd4ffde4f"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.921792 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"00c4fd2ed360e13891c41dd4a8e389d89e9453542b13dde1c17f926f7ba2d74c"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.921840 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"8f07f5ccae868aad7760b73c599016f6b8e5a582590f0ea9b5f64fa52abddb54"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.924045 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.924096 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"edd322d311f8f5ac345a0a4e342698c4d54f5de6cb34b4532468fa25fc270727"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.925891 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"c4d3749515b4fcf401999daef09c1cfddfb7bf563fe7aeed04ce8f5099d50ee8"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.925937 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"a001e61c2d75c26fb19a1d61f877b844f15c400ecad2478d6a12ae4e5b9ffa98"}
Nov 28 00:12:20 crc kubenswrapper[3556]: I1128 00:12:20.927160 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"8baa4879c8b7ab4a3ce82e9aa0445fec90fad4172f8b13dd9f5893da4b890c0a"}
Nov 28 00:12:21 crc kubenswrapper[3556]: E1128 00:12:21.340451 3556 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c03429d315212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:12:18.662371858 +0000 UTC m=+0.254603848,LastTimestamp:2025-11-28 00:12:18.662371858 +0000 UTC m=+0.254603848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 00:12:21 crc kubenswrapper[3556]: W1128 00:12:21.592324 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:21 crc kubenswrapper[3556]: E1128 00:12:21.592422 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.666629 3556 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:21 crc kubenswrapper[3556]: E1128 00:12:21.694529 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="3.2s"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.801814 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.804124 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.804196 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.804224 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.804262 3556 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:12:21 crc kubenswrapper[3556]: E1128 00:12:21.805663 3556 kubelet_node_status.go:100] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.223:6443: connect: connection refused" node="crc"
Nov 28 00:12:21 crc kubenswrapper[3556]: W1128 00:12:21.919635 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:21 crc kubenswrapper[3556]: E1128 00:12:21.919701 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.930097 3556 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8" exitCode=0
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.930164 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerDied","Data":"238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8"}
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.930257 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.932871 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.932904 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.932918 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.933870 3556 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="c4d3749515b4fcf401999daef09c1cfddfb7bf563fe7aeed04ce8f5099d50ee8" exitCode=0
Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.934090 3556 kubelet.go:2461]
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"c4d3749515b4fcf401999daef09c1cfddfb7bf563fe7aeed04ce8f5099d50ee8"} Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.934187 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.935306 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.935591 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.935600 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.935545 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.936772 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.936794 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.936802 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.937905 3556 generic.go:334] "Generic (PLEG): container finished" podID="d3ae206906481b4831fd849b559269c8" containerID="d14200448e4458ec9fa8717b23d09bf0d010baae0b0268f016780541d9e6886b" exitCode=0 Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.937943 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerDied","Data":"d14200448e4458ec9fa8717b23d09bf0d010baae0b0268f016780541d9e6886b"} Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.943729 3556 generic.go:334] "Generic (PLEG): container finished" podID="6a57a7fb1944b43a6bd11a349520d301" containerID="9b7bb323c59113896b02d3a6a8d82c3d50ede56c6eb8a942c85308b4f55bcfd0" exitCode=0 Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.943803 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerDied","Data":"9b7bb323c59113896b02d3a6a8d82c3d50ede56c6eb8a942c85308b4f55bcfd0"} Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.944100 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.945180 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.945215 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:21 crc kubenswrapper[3556]: I1128 00:12:21.945226 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:22 crc kubenswrapper[3556]: W1128 00:12:22.136163 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:22 crc kubenswrapper[3556]: E1128 00:12:22.136304 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:22 crc kubenswrapper[3556]: W1128 00:12:22.412433 3556 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:22 crc kubenswrapper[3556]: E1128 00:12:22.412536 3556 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:22 crc kubenswrapper[3556]: I1128 00:12:22.666807 3556 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.223:6443: connect: connection refused Nov 28 00:12:22 crc kubenswrapper[3556]: I1128 00:12:22.950770 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:22 crc kubenswrapper[3556]: I1128 00:12:22.950802 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"f8167d39fa1d07b9565cc04c1789413635b39d3825d42d9474a4f501c4908f58"} Nov 28 00:12:22 crc kubenswrapper[3556]: I1128 00:12:22.952371 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:22 crc kubenswrapper[3556]: I1128 00:12:22.952430 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Nov 28 00:12:22 crc kubenswrapper[3556]: I1128 00:12:22.952450 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.665510 3556 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": dial tcp 38.102.83.223:6443: connect: connection refused
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.963795 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"fcab63435700851fe6a94ed4be29ae966f8c45702fbaee7bd843b8f7a5641f2f"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.963930 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"9f7f1b7e5ee1852b69eb8d82cdb0420487fae3b724f5943c54541b5e04e06c06"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.966731 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"a42f5a37e78b02bf0d93bcaf01da23eb2c4966060b4ade3d1b6b3e26db97d268"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.966756 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"94343bc4605fa1eac03de87ea69d17b924155ee0800e855ad538b485fc3c606d"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.966855 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.968165 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.968209 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.968218 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.973529 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.973563 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.980445 3556 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="f90923ff67c94d2083a8e64f6c1ecccef377b78de8c55036a1fca9534ca844de" exitCode=0
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.980510 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"f90923ff67c94d2083a8e64f6c1ecccef377b78de8c55036a1fca9534ca844de"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.980661 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.982280 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.982326 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.982337 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.984858 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d3ae206906481b4831fd849b559269c8","Type":"ContainerStarted","Data":"e8ac0d39636be37a8a3c9c4bf545cf70be373301d0e53bcd6c29e27ec9b409be"}
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.984960 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.987696 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.987744 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:23 crc kubenswrapper[3556]: I1128 00:12:23.987759 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.992656 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"6a57a7fb1944b43a6bd11a349520d301","Type":"ContainerStarted","Data":"872fb78a88feaf637503c1eab419037f680d43922a00345418c1caea32300698"}
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.992758 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.994786 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.994841 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.994861 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.997049 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2"}
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.997106 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d"}
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.997126 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"ae85115fdc231b4002b57317b41a6400","Type":"ContainerStarted","Data":"a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c"}
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.997124 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.998085 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.998134 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.998150 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.999618 3556 generic.go:334] "Generic (PLEG): container finished" podID="b2a6a3b2ca08062d24afa4c01aaf9e4f" containerID="0ead77a0be89e755ebce58aa332fb3ccfdc4e8e6a32cf66be132cf14beb42883" exitCode=0
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.999681 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerDied","Data":"0ead77a0be89e755ebce58aa332fb3ccfdc4e8e6a32cf66be132cf14beb42883"}
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.999762 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:24 crc kubenswrapper[3556]: I1128 00:12:24.999802 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.001053 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.001110 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.001129 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.001228 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.001262 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.001276 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.006656 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.007839 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.007881 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.007895 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:25 crc kubenswrapper[3556]: I1128 00:12:25.007934 3556 kubelet_node_status.go:77] "Attempting to register node" node="crc"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.005376 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"877a76707c1d0a5078900e740fbbbe723485eeaf3470a49bf8e6f432cc26fae6"}
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.005426 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"4b09645a36657571ad67d3d85f61c3d1ddb82d2de78638eab6cb1370156a8648"}
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.005448 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.005491 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.005603 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.005649 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.006594 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.006675 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.006697 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.006855 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.006888 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:26 crc kubenswrapper[3556]: I1128 00:12:26.006898 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.014668 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"a1ab0ac668945e9c5a9598853d6af96229af6ca3b9a1c94e8b54e028c99c4475"}
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.014749 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"b2a6a3b2ca08062d24afa4c01aaf9e4f","Type":"ContainerStarted","Data":"6a802b658c1869c6756355522286bfba0745eae7ffd401aff98289e982a1d206"}
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.014767 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.014831 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.016563 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.016594 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.016625 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.016642 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.016655 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.016668 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.254087 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.254342 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.254393 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.256037 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.256100 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.256124 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.503786 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.504004 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.505817 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.505872 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:27 crc kubenswrapper[3556]: I1128 00:12:27.505891 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.019271 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.021225 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.021278 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.021297 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.321788 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.443462 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.443788 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.445503 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.445563 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:28 crc kubenswrapper[3556]: I1128 00:12:28.445582 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.021748 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.023441 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.023501 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.023521 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:29 crc kubenswrapper[3556]: E1128 00:12:29.286409 3556 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.557441 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.557679 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.559217 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.559267 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.559285 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.868410 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.868685 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.870241 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.870285 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.870298 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:29 crc kubenswrapper[3556]: I1128 00:12:29.965388 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:30 crc kubenswrapper[3556]: I1128 00:12:30.023467 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:30 crc kubenswrapper[3556]: I1128 00:12:30.024367 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:30 crc kubenswrapper[3556]: I1128 00:12:30.024392 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:30 crc kubenswrapper[3556]: I1128 00:12:30.024402 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:31 crc kubenswrapper[3556]: I1128 00:12:31.919455 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 28 00:12:31 crc kubenswrapper[3556]: I1128 00:12:31.920248 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:31 crc kubenswrapper[3556]: I1128 00:12:31.922109 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:31 crc kubenswrapper[3556]: I1128 00:12:31.922192 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:31 crc kubenswrapper[3556]: I1128 00:12:31.922214 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.227707 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.227892 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.229520 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.229612 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.229652 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.235920 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.965599 3556 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 28 00:12:32 crc kubenswrapper[3556]: I1128 00:12:32.965820 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="bd6a3a59e513625ca0ae3724df2686bc" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 00:12:33 crc kubenswrapper[3556]: I1128 00:12:33.032830 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:33 crc kubenswrapper[3556]: I1128 00:12:33.034513 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:33 crc kubenswrapper[3556]: I1128 00:12:33.034593 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:33 crc kubenswrapper[3556]: I1128 00:12:33.034658 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:33 crc kubenswrapper[3556]: I1128 00:12:33.040066 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.034907 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach"
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.036234 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.036266 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.036281 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.390578 3556 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.390689 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.666961 3556 csi_plugin.go:880] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc": net/http: TLS handshake timeout
Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.822807 3556 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/healthz\"","reason":"Forbidden","details":{},"code":403} Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.822948 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.829387 3556 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} Nov 28 00:12:34 crc kubenswrapper[3556]: I1128 00:12:34.830114 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 00:12:38 crc kubenswrapper[3556]: I1128 00:12:38.445330 3556 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 00:12:38 crc kubenswrapper[3556]: I1128 00:12:38.445439 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 00:12:39 crc kubenswrapper[3556]: E1128 00:12:39.287277 3556 eviction_manager.go:282] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.566285 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.566528 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.566937 3556 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.567037 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.568343 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.568379 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.568396 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.576205 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:12:39 crc kubenswrapper[3556]: E1128 00:12:39.816350 3556 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.819140 3556 trace.go:236] Trace[1006831786]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (28-Nov-2025 00:12:28.744) (total time: 11074ms): Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[1006831786]: ---"Objects listed" error: 11074ms (00:12:39.818) Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[1006831786]: [11.074208877s] [11.074208877s] END Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.819202 3556 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.823295 3556 trace.go:236] Trace[613858]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (28-Nov-2025 00:12:25.707) (total time: 14115ms): Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[613858]: ---"Objects listed" error: 14115ms (00:12:39.823) Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[613858]: [14.115976357s] [14.115976357s] END Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.823583 3556 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.823981 3556 trace.go:236] Trace[1065143731]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (28-Nov-2025 00:12:27.148) (total time: 12675ms): Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[1065143731]: ---"Objects listed" error: 12675ms (00:12:39.823) Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[1065143731]: [12.675519746s] [12.675519746s] END Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.824038 3556 reflector.go:351] Caches populated for 
*v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.824038 3556 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated" Nov 28 00:12:39 crc kubenswrapper[3556]: E1128 00:12:39.826405 3556 kubelet_node_status.go:100] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.829400 3556 trace.go:236] Trace[1937862942]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 (28-Nov-2025 00:12:27.324) (total time: 12504ms): Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[1937862942]: ---"Objects listed" error: 12504ms (00:12:39.829) Nov 28 00:12:39 crc kubenswrapper[3556]: Trace[1937862942]: [12.504575816s] [12.504575816s] END Nov 28 00:12:39 crc kubenswrapper[3556]: I1128 00:12:39.829439 3556 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.064807 3556 kubelet.go:1935] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.246768 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.255791 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.260093 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.269139 3556 kubelet.go:1935] "Failed creating 
a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.672630 3556 apiserver.go:52] "Watching apiserver" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.693481 3556 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.695991 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qdfr4","openshift-network-operator/network-operator-767c585db5-zd56b","openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc","openshift-image-registry/image-registry-75779c45fd-v2j2v","openshift-marketplace/community-operators-sdddl","openshift-marketplace/marketplace-operator-8b455464d-f9xdt","openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t","openshift-kube-scheduler/installer-8-crc","openshift-console/downloads-65476884b9-9wcvx","openshift-kube-controller-manager/revision-pruner-10-crc","openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv","openshift-machine-config-operator/machine-config-server-v65wr","openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz","openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-controller-manager/installer-10-crc","openshift-marketplace/redhat-operators-f4jkp","openshift-multus/multus-q88th","openshift-ovn-kubernetes/ovnkube-node-44qcg","openshift-dns-operator/dns-operator-75f687757b-nz2xb","openshift-ingress/router-default-5c9bf7bc58-6jctv","openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm","openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf","openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b","openshift-console/console-644bb77b49-5x5xk","openshi
ft-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr","openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd","openshift-ingress-canary/ingress-canary-2vhcn","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7","openshift-kube-controller-manager/installer-11-crc","openshift-kube-controller-manager/revision-pruner-11-crc","openshift-machine-config-operator/machine-config-daemon-zpnhg","openshift-network-node-identity/network-node-identity-7xghp","openshift-dns/node-resolver-dn27q","openshift-etcd/etcd-crc","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7","openshift-kube-scheduler/installer-7-crc","openshift-multus/multus-admission-controller-6c7c885997-4hbbc","openshift-network-diagnostics/network-check-target-v54bt","openshift-etcd-operator/etcd-operator-768d5b5d86-722mg","openshift-kube-apiserver/kube-apiserver-crc","openshift-console-operator/console-conversion-webhook-595f9969b-l6z49","openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z","openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb","openshift-marketplace/certified-operators-7287f","openshift-network-operator/iptables-alerter-wwpnd","openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2","openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m","openshift-apiserver/apiserver-7fc54b8dd7-d2bhp","openshift-marketplace/redhat-marketplace-8s8pc","openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8","openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg","openshift-multus/multus-additional-cni-plugins-bzj2p","openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7","openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
,"openshift-service-ca/service-ca-666f99b6f-kk8kg","hostpath-provisioner/csi-hostpathplugin-hvm8g","openshift-kube-controller-manager/installer-10-retry-1-crc","openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j","openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs","openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9","openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh","openshift-dns/dns-default-gbw49","openshift-kube-apiserver/installer-9-crc","openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw","openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2","openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b","openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46","openshift-console-operator/console-operator-5dbbc74dc9-cp5cd","openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz","openshift-kube-apiserver/installer-12-crc","openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb","openshift-kube-controller-manager/revision-pruner-8-crc","openshift-kube-controller-manager/revision-pruner-9-crc","openshift-marketplace/community-operators-8jhz6","openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd","openshift-controller-manager/controller-manager-778975cc4f-x5vcf","openshift-image-registry/node-ca-l92hr"] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696097 3556 topology_manager.go:215] "Topology Admit Handler" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" podNamespace="openshift-service-ca-operator" podName="service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696236 3556 topology_manager.go:215] "Topology Admit Handler" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" podNamespace="openshift-operator-lifecycle-manager" podName="package-server-manager-84d578d794-jw7r2" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696296 
3556 topology_manager.go:215] "Topology Admit Handler" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" podNamespace="openshift-kube-apiserver-operator" podName="kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696353 3556 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" podNamespace="openshift-operator-lifecycle-manager" podName="catalog-operator-857456c46-7f5wf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696408 3556 topology_manager.go:215] "Topology Admit Handler" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" podNamespace="openshift-machine-config-operator" podName="machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696472 3556 topology_manager.go:215] "Topology Admit Handler" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696496 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696565 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696578 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696622 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.696745 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.696818 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.696997 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.697133 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.697193 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.697214 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.697289 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.697360 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.697419 3556 topology_manager.go:215] "Topology Admit Handler" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" podNamespace="openshift-operator-lifecycle-manager" podName="olm-operator-6d8474f75f-x54mh" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.697691 3556 topology_manager.go:215] "Topology Admit Handler" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" podNamespace="openshift-machine-api" podName="machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.697837 3556 topology_manager.go:215] "Topology Admit Handler" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" podNamespace="openshift-etcd-operator" podName="etcd-operator-768d5b5d86-722mg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.697971 3556 topology_manager.go:215] "Topology Admit Handler" podUID="cc291782-27d2-4a74-af79-c7dcb31535d2" podNamespace="openshift-network-operator" podName="network-operator-767c585db5-zd56b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.698151 3556 topology_manager.go:215] "Topology Admit Handler" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" podNamespace="openshift-machine-api" podName="control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.698226 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.698301 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.698340 3556 topology_manager.go:215] "Topology Admit Handler" podUID="71af81a9-7d43-49b2-9287-c375900aa905" podNamespace="openshift-kube-scheduler-operator" podName="openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.698386 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.698560 3556 topology_manager.go:215] "Topology Admit Handler" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" podNamespace="openshift-config-operator" podName="openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.698605 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.698720 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" podNamespace="openshift-authentication-operator" podName="authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.701565 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.701733 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.702256 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" podNamespace="openshift-kube-storage-version-migrator-operator" podName="kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.702621 3556 topology_manager.go:215] "Topology Admit Handler" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" podNamespace="openshift-controller-manager-operator" podName="openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.702978 3556 topology_manager.go:215] "Topology Admit Handler" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" podNamespace="openshift-kube-controller-manager-operator" podName="kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.703391 3556 topology_manager.go:215] "Topology Admit Handler" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" podNamespace="openshift-apiserver-operator" podName="openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.703782 3556 topology_manager.go:215] "Topology Admit Handler" podUID="10603adc-d495-423c-9459-4caa405960bb" podNamespace="openshift-dns-operator" podName="dns-operator-75f687757b-nz2xb" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704287 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704340 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.704532 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704609 3556 topology_manager.go:215] "Topology Admit Handler" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" podNamespace="openshift-image-registry" podName="cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704639 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704652 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704661 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704819 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704842 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704680 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.704606 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.705542 3556 topology_manager.go:215] "Topology Admit Handler" podUID="475321a1-8b7e-4033-8f72-b05a8b377347" podNamespace="openshift-multus" podName="multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.705668 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.705912 3556 topology_manager.go:215] "Topology Admit Handler" podUID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" podNamespace="openshift-multus" podName="multus-additional-cni-plugins-bzj2p"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.706074 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.706170 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.706255 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.706362 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.706440 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.706544 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.706562 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.706629 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.706655 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.706747 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.706965 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.707053 3556 topology_manager.go:215] "Topology Admit Handler" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" podNamespace="openshift-multus" podName="network-metrics-daemon-qdfr4"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.706740 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.707885 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.707961 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.708046 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.708152 3556 topology_manager.go:215] "Topology Admit Handler" podUID="410cf605-1970-4691-9c95-53fdc123b1f3" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-control-plane-77c846df58-6l97b"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.708195 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.708268 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.708777 3556 topology_manager.go:215] "Topology Admit Handler" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" podNamespace="openshift-network-diagnostics" podName="network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.709157 3556 topology_manager.go:215] "Topology Admit Handler" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" podNamespace="openshift-network-diagnostics" podName="network-check-target-v54bt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.709521 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.709684 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.709735 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.709574 3556 topology_manager.go:215] "Topology Admit Handler" podUID="51a02bbf-2d40-4f84-868a-d399ea18a846" podNamespace="openshift-network-node-identity" podName="network-node-identity-7xghp"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.710241 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.709593 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.710673 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.717509 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.718183 3556 topology_manager.go:215] "Topology Admit Handler" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-44qcg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.718675 3556 topology_manager.go:215] "Topology Admit Handler" podUID="2b6d14a5-ca00-40c7-af7a-051a98a24eed" podNamespace="openshift-network-operator" podName="iptables-alerter-wwpnd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.718959 3556 topology_manager.go:215] "Topology Admit Handler" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" podNamespace="openshift-kube-storage-version-migrator" podName="migrator-f7c6d88df-q2fnv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.719227 3556 topology_manager.go:215] "Topology Admit Handler" podUID="6a23c0ee-5648-448c-b772-83dced2891ce" podNamespace="openshift-dns" podName="node-resolver-dn27q"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.719470 3556 topology_manager.go:215] "Topology Admit Handler" podUID="13045510-8717-4a71-ade4-be95a76440a7" podNamespace="openshift-dns" podName="dns-default-gbw49"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.719727 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9fb762d1-812f-43f1-9eac-68034c1ecec7" podNamespace="openshift-cluster-version" podName="cluster-version-operator-6d5d9649f6-x6d46"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.720006 3556 topology_manager.go:215] "Topology Admit Handler" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" podNamespace="openshift-oauth-apiserver" podName="apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.720356 3556 topology_manager.go:215] "Topology Admit Handler" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" podNamespace="openshift-operator-lifecycle-manager" podName="packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.720658 3556 topology_manager.go:215] "Topology Admit Handler" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" podNamespace="openshift-ingress-operator" podName="ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.720935 3556 topology_manager.go:215] "Topology Admit Handler" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" podNamespace="openshift-cluster-samples-operator" podName="cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.721246 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ec1bae8b-3200-4ad9-b33b-cf8701f3027c" podNamespace="openshift-cluster-machine-approver" podName="machine-approver-7874c8775-kh4j9"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.721521 3556 topology_manager.go:215] "Topology Admit Handler" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" podNamespace="openshift-ingress" podName="router-default-5c9bf7bc58-6jctv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.721809 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" podNamespace="openshift-machine-config-operator" podName="machine-config-daemon-zpnhg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.722231 3556 topology_manager.go:215] "Topology Admit Handler" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" podNamespace="openshift-console-operator" podName="console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.722606 3556 topology_manager.go:215] "Topology Admit Handler" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" podNamespace="openshift-console-operator" podName="console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.722926 3556 topology_manager.go:215] "Topology Admit Handler" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" podNamespace="openshift-machine-config-operator" podName="machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.723271 3556 topology_manager.go:215] "Topology Admit Handler" podUID="6268b7fe-8910-4505-b404-6f1df638105c" podNamespace="openshift-console" podName="downloads-65476884b9-9wcvx"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.723582 3556 topology_manager.go:215] "Topology Admit Handler" podUID="bf1a8b70-3856-486f-9912-a2de1d57c3fb" podNamespace="openshift-machine-config-operator" podName="machine-config-server-v65wr"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.724065 3556 topology_manager.go:215] "Topology Admit Handler" podUID="f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e" podNamespace="openshift-image-registry" podName="node-ca-l92hr"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.724465 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.724595 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.724699 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.724916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.725163 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.725304 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.725404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.725649 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.725899 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.726366 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.726403 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.726596 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.726739 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.727417 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.727577 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.727692 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.728328 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.728509 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.728650 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.728786 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.724478 3556 topology_manager.go:215] "Topology Admit Handler" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" podNamespace="openshift-ingress-canary" podName="ingress-canary-2vhcn"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.729195 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.729573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.729653 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.729713 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.729731 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.729786 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.729942 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.729968 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.730045 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.730650 3556 topology_manager.go:215] "Topology Admit Handler" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" podNamespace="openshift-multus" podName="multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.730732 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.730823 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.731175 3556 topology_manager.go:215] "Topology Admit Handler" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" podNamespace="hostpath-provisioner" podName="csi-hostpathplugin-hvm8g"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.731455 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.731910 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.732113 3556 topology_manager.go:215] "Topology Admit Handler" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" podNamespace="openshift-marketplace" podName="certified-operators-7287f"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.732419 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.732636 3556 topology_manager.go:215] "Topology Admit Handler" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" podNamespace="openshift-marketplace" podName="community-operators-8jhz6"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.732663 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733045 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733081 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733197 3556 topology_manager.go:215] "Topology Admit Handler" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" podNamespace="openshift-marketplace" podName="redhat-marketplace-8s8pc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733305 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733447 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733636 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733717 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.733786 3556 topology_manager.go:215] "Topology Admit Handler" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" podNamespace="openshift-marketplace" podName="redhat-operators-f4jkp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.732678 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.734217 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.734353 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.734444 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.734534 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.734650 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.734836 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.734970 3556 topology_manager.go:215] "Topology Admit Handler" podUID="72854c1e-5ae2-4ed6-9e50-ff3bccde2635" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-8-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.735120 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.735265 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.735072 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.735568 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.736175 3556 topology_manager.go:215] "Topology Admit Handler" podUID="e4a7de23-6134-4044-902a-0900dc04a501" podNamespace="openshift-service-ca" podName="service-ca-666f99b6f-kk8kg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.736643 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-8-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.737162 3556 topology_manager.go:215] "Topology Admit Handler" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251920-wcws2"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.737176 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.737359 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.738069 3556 topology_manager.go:215] "Topology Admit Handler" podUID="a0453d24-e872-43af-9e7a-86227c26d200" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-9-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.738281 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.746234 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.746331 3556 topology_manager.go:215] "Topology Admit Handler" podUID="2ad657a4-8b02-4373-8d0d-b0e25345dc90" podNamespace="openshift-kube-apiserver" podName="installer-9-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.746961 3556 topology_manager.go:215] "Topology Admit Handler" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" podNamespace="openshift-image-registry" podName="image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.747096 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.747275 3556 topology_manager.go:215] "Topology Admit Handler" podUID="b57cce81-8ea0-4c4d-aae1-ee024d201c15" podNamespace="openshift-kube-scheduler" podName="installer-7-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.747404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.747547 3556 topology_manager.go:215] "Topology Admit Handler" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" podNamespace="openshift-authentication" podName="oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.747685 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.747846 3556 topology_manager.go:215] "Topology Admit Handler" podUID="2f155735-a9be-4621-a5f2-5ab4b6957acd" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-10-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.747943 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.748208 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.748306 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.748411 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.748426 3556 topology_manager.go:215] "Topology Admit Handler" podUID="79050916-d488-4806-b556-1b0078b31e53" podNamespace="openshift-kube-controller-manager" podName="installer-10-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.748913 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" podNamespace="openshift-console" podName="console-644bb77b49-5x5xk"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.749202 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.749238 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.749284 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.749403 3556 topology_manager.go:215] "Topology Admit Handler" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" podNamespace="openshift-apiserver" podName="apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.749650 3556 topology_manager.go:215] "Topology Admit Handler" podUID="dc02677d-deed-4cc9-bb8c-0dd300f83655" podNamespace="openshift-kube-controller-manager" podName="installer-10-retry-1-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.749849 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.750063 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.750234 3556 topology_manager.go:215] "Topology Admit Handler" podUID="1784282a-268d-4e44-a766-43281414e2dc" podNamespace="openshift-kube-controller-manager" podName="revision-pruner-11-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.750924 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-retry-1-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.750937 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.751849 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.752577 3556 topology_manager.go:215] "Topology Admit Handler" podUID="aca1f9ff-a685-4a78-b461-3931b757f754" podNamespace="openshift-kube-scheduler" podName="installer-8-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.754316 3556 topology_manager.go:215] "Topology Admit Handler" podUID="a45bfab9-f78b-4d72-b5b7-903e60401124" podNamespace="openshift-kube-controller-manager" podName="installer-11-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.754707 3556 topology_manager.go:215] "Topology Admit Handler" podUID="3557248c-8f70-4165-aa66-8df983e7e01a" podNamespace="openshift-kube-apiserver" podName="installer-12-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.755136 3556 topology_manager.go:215] "Topology Admit Handler" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.755338 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.755382 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.755638 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.755810 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.755905 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.756617 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.757093 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.758327 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.758841 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.758963 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.758866 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.759460 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.759767 3556 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.760343 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.760558 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.762350 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.762455 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.762533 3556 topology_manager.go:215] "Topology Admit Handler" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" podNamespace="openshift-controller-manager" podName="controller-manager-778975cc4f-x5vcf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.762363 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.762941 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.762989 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.763389 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.763518 3556 topology_manager.go:215] "Topology Admit Handler" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" 
podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251935-d7x6j" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.763668 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d67253e-2acd-4bc1-8185-793587da4f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24216f0c25a6e1d33af5f8798e7066a97c6c468ad09b8fad7342ee280db29d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7555d542c802e58046a90350e414a08c9d856a865303fa64131537f1cc00fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:09Z\\\"}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod 
\"openshift-service-ca-operator\"/\"service-ca-operator-546b4f8984-pwccz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.763894 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.764101 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.764158 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.764317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.764544 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.764607 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.764661 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.765062 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.765123 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 00:12:40 crc 
kubenswrapper[3556]: I1128 00:12:40.764133 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ad171c4b-8408-4370-8e86-502999788ddb" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29251950-x8jjd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.765458 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.764141 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.765648 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.765700 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.765730 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.766320 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.766896 3556 topology_manager.go:215] "Topology Admit Handler" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" podNamespace="openshift-marketplace" podName="community-operators-sdddl" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.767138 3556 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.767878 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.768110 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.768414 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.768717 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.768910 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.769501 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.788513 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3ef5d43082d2ea06ff8ebdc73d431372f8a376212f30c5008a7b9579df7014\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:00:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:05Z\\\"}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd9205b185124b3b67669bb3166734f9e22831957c457aa1083f4f2bc4750312\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-bc474d5d6-wshwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.794372 3556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.803493 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d0dcce3-d96e-48cb-9b9f-362105911589\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89a64d46c29f00f7b312c28b56d205ce2494ead0d57a058e5e012245963e665\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:57:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:54:10Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zpnhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.818775 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d51f445-054a-4e4f-a67b-a828f5a32511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6ead43bdb764cbbb4c3390efab755e94af49cb95729c3c5d78be72155f2cf72\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://200de7f83d9a904f95a828b45ad75259caec176a8dddad3b3d43cc421fdead44\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-08-13T20:07:30Z\\\",\\\"message\\\":\\\" request from succeeding\\\\nW0813 20:07:30.198690 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Pod ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.201950 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Event ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request 
from succeeding\\\\nW0813 20:07:30.198766 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.198484 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.ConfigMap ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.202220 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW0813 20:07:30.199382 1 reflector.go:462] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:105: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2025-08-13T20:07:30.223Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1650\\\\tWait completed, proceeding to shutdown the manager\\\\n2025-08-13T20:07:30.228Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:944\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T20:05:07Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-7d46d5bb6d-rrg6t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830066 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830137 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: 
\"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830182 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830223 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830285 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830351 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830411 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" 
(UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830454 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830507 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830641 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830688 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830732 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830773 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830821 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830864 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830910 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod 
\"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.830953 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.831002 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.831073 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.831116 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.831164 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.831219 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.831261 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.832360 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.832838 3556 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.832877 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object 
"openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.833043 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.332946736 +0000 UTC m=+22.925178936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.831983 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.833668 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"297ab9b6-2186-4d5b-a952-2bfd59af63c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6ee79b28e5b577df5d2e78c5d20b367b69a4eb87a6cd831a6c711e24daab251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-6df6df6b6b-58shh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.833845 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.833898 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.333886347 +0000 UTC m=+22.926118357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.834480 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.834703 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.834839 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.334809539 +0000 UTC m=+22.927041539 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.834840 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.834916 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.334904411 +0000 UTC m=+22.927136421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.835533 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.835611 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.835757 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.835826 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.835906 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.835946 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.835988 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.836032 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.336000717 +0000 UTC m=+22.928232717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836067 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836103 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836137 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836168 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836199 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836230 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836259 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836292 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836320 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:40 crc 
kubenswrapper[3556]: I1128 00:12:40.836349 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836382 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836413 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836448 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836479 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-c2f8t\" (UniqueName: \"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: 
\"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836509 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836696 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836755 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836827 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836880 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod 
\"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836915 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836954 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.836997 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838142 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838183 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod 
\"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838220 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838269 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838311 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838350 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838384 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838421 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838453 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838483 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838521 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838552 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838582 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838614 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838649 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838680 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838718 3556 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838755 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838786 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838817 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838845 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838873 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838905 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.838937 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839051 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839087 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfhc\" (UniqueName: 
\"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839120 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839148 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839179 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839209 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:40 crc 
kubenswrapper[3556]: I1128 00:12:40.839237 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839269 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839297 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839330 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839363 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: 
\"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839393 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839423 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839453 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839483 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839514 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839545 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839572 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839604 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839638 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " 
pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839669 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839733 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839765 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839794 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839822 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839854 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839885 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839914 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839944 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.839973 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840000 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840054 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840088 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840122 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840153 3556 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840180 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840210 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840239 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840269 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840302 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840329 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840361 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840392 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840432 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 
28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840478 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840514 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.840515 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840545 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840580 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.840612 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.340579184 +0000 UTC m=+22.932811204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.840648 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840659 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.840700 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.340683377 +0000 UTC m=+22.932915577 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840740 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.840767 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.840784 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842188 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-ovnkube-identity-cm\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.842594 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" 
failed. No retries permitted until 2025-11-28 00:12:41.342571842 +0000 UTC m=+22.934804052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842659 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842705 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842751 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842804 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842853 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842897 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842931 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.842968 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843043 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843084 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843133 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843178 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843222 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843263 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843309 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843355 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843399 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843450 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843500 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843544 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843589 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843631 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843675 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843748 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843795 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843838 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843880 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843926 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.843970 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844040 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844078 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844109 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844167 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844210 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844249 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844299 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844342 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844384 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844433 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844481 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844528 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844569 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844616 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844660 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844705 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844747 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844792 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844841 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.844889 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848271 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848357 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.348325166 +0000 UTC m=+22.940557176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848462 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848513 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.34849839 +0000 UTC m=+22.940730400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848610 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848685 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.348657764 +0000 UTC m=+22.940889764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848786 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.848873 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.348849538 +0000 UTC m=+22.941081528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849061 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849122 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.349091574 +0000 UTC m=+22.941323584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849363 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849401 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.34939082 +0000 UTC m=+22.941622820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849725 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849764 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.34975422 +0000 UTC m=+22.941986220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849799 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.849827 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.349818591 +0000 UTC m=+22.942050591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.849819 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.850390 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.350378704 +0000 UTC m=+22.942610704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.850739 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.850847 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.350819254 +0000 UTC m=+22.943051254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.850744 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.850859 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851006 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851077 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851116 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851156 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851198 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851230 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851264 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851296 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851327 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851359 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851393 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851451 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851512 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851614 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851645 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851674 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851688 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851759 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851778 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851784 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851734 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851890 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.351813927 +0000 UTC m=+22.944045927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851918 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.351904519 +0000 UTC m=+22.944136519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851928 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851952 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.851968 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.852319 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.852967 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.352004682 +0000 UTC m=+22.944236882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.852997 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.352986575 +0000 UTC m=+22.945218585 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.853180 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bd556935-a077-45df-ba3f-d42c39326ccd-tmpfs\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.853184 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51a02bbf-2d40-4f84-868a-d399ea18a846-env-overrides\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853247 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853295 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.353283152 +0000 UTC m=+22.945515162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853339 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853366 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.353357634 +0000 UTC m=+22.945589634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853765 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853785 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853797 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.853880 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.353843556 +0000 UTC m=+22.946075556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.851953 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.853937 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.853973 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: 
\"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854007 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854072 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854106 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854143 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854211 3556 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854245 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854278 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854277 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-binary-copy\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.854311 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:40 crc kubenswrapper[3556]: 
E1128 00:12:40.862187 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862295 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.862336 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.362293183 +0000 UTC m=+22.954525213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862417 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862517 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862577 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862637 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862685 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862742 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862795 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: 
\"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862869 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862918 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.862975 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863068 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863121 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863172 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863218 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863268 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863314 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863361 3556 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863406 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.863451 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.866673 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.866967 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868290 3556 projected.go:294] Couldn't get 
configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868329 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868348 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868428 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.368399616 +0000 UTC m=+22.960631796 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868655 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868671 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868682 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.868712 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.368703803 +0000 UTC m=+22.960935793 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.862424 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.869069 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.369059612 +0000 UTC m=+22.961291602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.869815 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.869848 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.36983971 +0000 UTC m=+22.962071700 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.856307 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-auth-proxy-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.856824 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.857105 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870171 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870182 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870214 3556 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370206818 +0000 UTC m=+22.962438808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.857627 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870258 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370250269 +0000 UTC m=+22.962482259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.857776 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.857876 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870313 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.37030616 +0000 UTC m=+22.962538150 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.857937 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870352 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370346591 +0000 UTC m=+22.962578581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.858116 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870392 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370386742 +0000 UTC m=+22.962618722 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.858152 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870421 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370416133 +0000 UTC m=+22.962648123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.858372 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870454 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370449064 +0000 UTC m=+22.962681054 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.858407 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870483 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370478725 +0000 UTC m=+22.962710715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.858469 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.870512 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.370505926 +0000 UTC m=+22.962737916 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.871868 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.872054 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.372032472 +0000 UTC m=+22.964264472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.872316 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.872806 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") "
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.873190 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.873325 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.373309272 +0000 UTC m=+22.965541272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.874937 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fb762d1-812f-43f1-9eac-68034c1ecec7-service-ca\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.862000 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.856158 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-multus-daemon-config\") pod \"multus-q88th\" (UID:
\"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.875157 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.875203 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.875579 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.875910 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.875947 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.875976 3556 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.876005 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.876067 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object
"openshift-apiserver"/"etcd-client" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.876095 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.877441 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-node-bootstrap-token\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.877557 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.878591 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.378570674 +0000 UTC m=+22.970802674 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.878834 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.879626 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.379610369 +0000 UTC m=+22.971842369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.882135 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/297ab9b6-2186-4d5b-a952-2bfd59af63c4-mcc-auth-proxy-config\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.882242 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.882372 3556 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.382333232 +0000 UTC m=+22.974565242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.882436 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.882476 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.382467756 +0000 UTC m=+22.974699756 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.882527 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883022 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883079 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883162 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883287 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883329 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883622 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883722 3556
configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.883903 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.884214 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.884242 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.884267 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.884304 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.884541 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.884729 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Nov 28 00:12:40 crc
kubenswrapper[3556]: E1128 00:12:40.884733 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.884766 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.884879 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.885169 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.385143188 +0000 UTC m=+22.977375198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.890584 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.390573326 +0000 UTC m=+22.982805316 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.890600 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.390591476 +0000 UTC m=+22.982823466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.890618 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.390608336 +0000 UTC m=+22.982840326 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890657 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890690 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890713 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890736 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID:
\"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890758 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890781 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890803 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890828 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890852 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName:
\"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890878 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890903 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890940 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890970 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") "
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.890996 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891032 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891062 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891090 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891115 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891145 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891174 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891198 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891223 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891247 3556 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891273 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891297 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891319 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891342 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") 
" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891367 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891391 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891413 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.891436 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.895143 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-auth-proxy-config\") pod \"machine-config-operator-76788bff89-wkjgm\" 
(UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.885227 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.885268 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.885307 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.885389 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.885428 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.885571 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-env-overrides\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.885644 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.886574 3556 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-config\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.886629 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.886688 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.886876 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.886979 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-serviceca\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.887235 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.887309 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.887567 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not 
registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.888298 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b6d14a5-ca00-40c7-af7a-051a98a24eed-iptables-alerter-script\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.888353 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.888502 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.889244 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.889280 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.889312 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.889344 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.889376 3556 secret.go:194] Couldn't get secret 
openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.889834 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-metrics-certs\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.889853 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/410cf605-1970-4691-9c95-53fdc123b1f3-ovnkube-config\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895487 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.39547826 +0000 UTC m=+22.987710250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895501 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:41.39549466 +0000 UTC m=+22.987726650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895515 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395508061 +0000 UTC m=+22.987740051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895527 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395521581 +0000 UTC m=+22.987753571 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895541 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395533681 +0000 UTC m=+22.987765671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895552 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395547372 +0000 UTC m=+22.987779362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895564 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:41.395558692 +0000 UTC m=+22.987790682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895576 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395569822 +0000 UTC m=+22.987801812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895586 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395581512 +0000 UTC m=+22.987813502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895598 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395592053 +0000 UTC m=+22.987824033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895609 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395604643 +0000 UTC m=+22.987836633 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895620 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395615403 +0000 UTC m=+22.987847383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895631 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395625923 +0000 UTC m=+22.987857913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895641 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:41.395636394 +0000 UTC m=+22.987868384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895652 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395647734 +0000 UTC m=+22.987879714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895662 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395657574 +0000 UTC m=+22.987889564 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895672 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395667255 +0000 UTC m=+22.987899245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895702 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395696616 +0000 UTC m=+22.987928616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895715 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:41.395709176 +0000 UTC m=+22.987941176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895728 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395722347 +0000 UTC m=+22.987954337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895739 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395733677 +0000 UTC m=+22.987965667 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.895750 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.395745787 +0000 UTC m=+22.987977777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.896264 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.896752 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.897151 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/475321a1-8b7e-4033-8f72-b05a8b377347-cni-binary-copy\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " 
pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.898467 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa90b3c2-febd-4588-a063-7fbbe82f00c1-service-ca-bundle\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.898512 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d0dcce3-d96e-48cb-9b9f-362105911589-mcd-auth-proxy-config\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899098 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399068174 +0000 UTC m=+22.991300184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899169 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399159687 +0000 UTC m=+22.991391687 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899206 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399198267 +0000 UTC m=+22.991430277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899244 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399233848 +0000 UTC m=+22.991465858 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899283 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:41.399274699 +0000 UTC m=+22.991506709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899318 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.39930978 +0000 UTC m=+22.991541780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899363 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399355531 +0000 UTC m=+22.991587541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899399 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399391722 +0000 UTC m=+22.991623722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899438 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399430773 +0000 UTC m=+22.991662773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899476 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:41.399467434 +0000 UTC m=+22.991699444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899511 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399502764 +0000 UTC m=+22.991734764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899566 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399547725 +0000 UTC m=+22.991779915 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899602 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899622 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899638 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899709 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899722 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899732 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.899857 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-f4jkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4092a9f8-5acc-4932-9e90-ef962eeb301a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd647545b1e1e8133f835b5842318c4a574964a1089d0c79e368492f43f4be0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-f4jkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899940 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899951 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899959 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.900086 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900159 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 
00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900201 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900216 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900224 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900288 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900300 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900308 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900325 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900382 3556 projected.go:294] Couldn't get 
configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900395 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900403 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.900476 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900622 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900637 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.900644 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.900703 3556 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.900756 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc291782-27d2-4a74-af79-c7dcb31535d2-metrics-tls\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.901099 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/530553aa-0a1d-423e-8a22-f5eb4bdbb883-available-featuregates\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901144 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901201 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901281 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901320 3556 configmap.go:199] 
Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901362 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901423 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901456 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901963 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901981 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.901989 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.902392 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object 
"openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.906092 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzjk\" (UniqueName: \"kubernetes.io/projected/9d0dcce3-d96e-48cb-9b9f-362105911589-kube-api-access-xkzjk\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906135 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906146 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906205 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906226 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906356 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906478 3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object 
"openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906495 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906512 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906579 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906601 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.906614 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.899610 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.399601437 +0000 UTC m=+22.991833447 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909181 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909213 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909227 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909260 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409197281 +0000 UTC m=+23.001429271 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909288 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409278653 +0000 UTC m=+23.001510643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909310 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409303344 +0000 UTC m=+23.001535334 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909329 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409322174 +0000 UTC m=+23.001554164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909343 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409336605 +0000 UTC m=+23.001568585 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909356 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409348945 +0000 UTC m=+23.001580935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909376 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409367175 +0000 UTC m=+23.001599165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909394 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409386926 +0000 UTC m=+23.001618916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909406 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409399486 +0000 UTC m=+23.001631476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909419 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409412826 +0000 UTC m=+23.001644816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909434 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409426777 +0000 UTC m=+23.001658767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909449 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409442917 +0000 UTC m=+23.001674907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909465 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409458657 +0000 UTC m=+23.001690647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909478 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409471788 +0000 UTC m=+23.001703778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909489 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409483298 +0000 UTC m=+23.001715288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909501 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409494658 +0000 UTC m=+23.001726648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909515 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409509539 +0000 UTC m=+23.001741529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909532 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409526689 +0000 UTC m=+23.001758679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909547 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409539909 +0000 UTC m=+23.001771899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909561 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.40955446 +0000 UTC m=+23.001786450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909575 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.40956792 +0000 UTC m=+23.001799910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909587 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.40958149 +0000 UTC m=+23.001813480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909603 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409596561 +0000 UTC m=+23.001828551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909616 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409610181 +0000 UTC m=+23.001842161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909629 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409623991 +0000 UTC m=+23.001855981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909647 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409641062 +0000 UTC m=+23.001873052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909660 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409654262 +0000 UTC m=+23.001886252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.909674 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.409667563 +0000 UTC m=+23.001899553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.911148 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.911166 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.911177 3556 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.911214 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.411201499 +0000 UTC m=+23.003433489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912270 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912283 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912292 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912351 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912361 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912368 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912411 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.412402906 +0000 UTC m=+23.004634896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912597 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912625 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.412618652 +0000 UTC m=+23.004850642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912651 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.412644173 +0000 UTC m=+23.004876153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912699 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.412693974 +0000 UTC m=+23.004925964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.912799 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912839 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.912870 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.412864318 +0000 UTC m=+23.005096298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.913331 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.413321918 +0000 UTC m=+23.005553908 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.913398 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.41339226 +0000 UTC m=+23.005624250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.921646 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2n9\" (UniqueName: \"kubernetes.io/projected/bf1a8b70-3856-486f-9912-a2de1d57c3fb-kube-api-access-6z2n9\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.922872 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.922906 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.922922 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.922999 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.422975354 +0000 UTC m=+23.015207344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.936597 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr9t\" (UniqueName: \"kubernetes.io/projected/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-kube-api-access-4qr9t\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.937077 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.937884 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6dp\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-kube-api-access-9x6dp\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.938404 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/410cf605-1970-4691-9c95-53fdc123b1f3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.938693 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b54e8941-2fc4-432a-9e51-39684df9089e-bound-sa-token\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.938819 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfhc\" (UniqueName: \"kubernetes.io/projected/cc291782-27d2-4a74-af79-c7dcb31535d2-kube-api-access-4sfhc\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.939265 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qn7\" (UniqueName: \"kubernetes.io/projected/2b6d14a5-ca00-40c7-af7a-051a98a24eed-kube-api-access-j4qn7\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.939442 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/51a02bbf-2d40-4f84-868a-d399ea18a846-webhook-cert\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.939564 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-default-certificate\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.939716 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ec1bae8b-3200-4ad9-b33b-cf8701f3027c-machine-approver-tls\") pod \"machine-approver-7874c8775-kh4j9\" (UID: \"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\") " pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.939981 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940128 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940156 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940171 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940234 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.440216178 +0000 UTC m=+23.032448168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940349 3556 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940373 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940382 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940416 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.440408272 +0000 UTC m=+23.032640262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940476 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940488 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940496 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940528 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.440520016 +0000 UTC m=+23.032752006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940606 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940621 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940630 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940662 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.440656319 +0000 UTC m=+23.032888309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940717 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940734 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940765 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.940799 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.440787582 +0000 UTC m=+23.033019562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.940889 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgqn\" (UniqueName: \"kubernetes.io/projected/297ab9b6-2186-4d5b-a952-2bfd59af63c4-kube-api-access-vtgqn\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.941231 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjml\" (UniqueName: \"kubernetes.io/projected/13045510-8717-4a71-ade4-be95a76440a7-kube-api-access-dtjml\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.941292 3556 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.941449 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fb762d1-812f-43f1-9eac-68034c1ecec7-serving-cert\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.941455 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkfv\" (UniqueName: \"kubernetes.io/projected/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-kube-api-access-rkkfv\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.942186 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/aa90b3c2-febd-4588-a063-7fbbe82f00c1-stats-auth\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942340 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942351 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942406 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object 
"openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942482 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.442452801 +0000 UTC m=+23.034684791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942368 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942517 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942566 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.442549333 +0000 UTC m=+23.034781323 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942575 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942617 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942631 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942715 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.442691046 +0000 UTC m=+23.034923026 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.940169 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d0dcce3-d96e-48cb-9b9f-362105911589-proxy-tls\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.942978 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.943004 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.943039 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: E1128 00:12:40.943086 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:41.443071795 +0000 UTC m=+23.035303775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.943679 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg2w\" (UniqueName: \"kubernetes.io/projected/51a02bbf-2d40-4f84-868a-d399ea18a846-kube-api-access-zjg2w\") pod \"network-node-identity-7xghp\" (UID: \"51a02bbf-2d40-4f84-868a-d399ea18a846\") " pod="openshift-network-node-identity/network-node-identity-7xghp" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.948324 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-bound-sa-token\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.948399 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4f9\" (UniqueName: \"kubernetes.io/projected/410cf605-1970-4691-9c95-53fdc123b1f3-kube-api-access-cx4f9\") pod \"ovnkube-control-plane-77c846df58-6l97b\" (UID: \"410cf605-1970-4691-9c95-53fdc123b1f3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.949626 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2f8t\" (UniqueName: 
\"kubernetes.io/projected/475321a1-8b7e-4033-8f72-b05a8b377347-kube-api-access-c2f8t\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.951269 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf1a8b70-3856-486f-9912-a2de1d57c3fb-certs\") pod \"machine-config-server-v65wr\" (UID: \"bf1a8b70-3856-486f-9912-a2de1d57c3fb\") " pod="openshift-machine-config-operator/machine-config-server-v65wr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.951454 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvjb\" (UniqueName: \"kubernetes.io/projected/120b38dc-8236-4fa6-a452-642b8ad738ee-kube-api-access-bwvjb\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:40 crc kubenswrapper[3556]: W1128 00:12:40.957131 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec1bae8b_3200_4ad9_b33b_cf8701f3027c.slice/crio-7d6c7f3885de8b542f25f5638f8e5e829fd3a3517e4f4ae711e03dd1a3b19829 WatchSource:0}: Error finding container 7d6c7f3885de8b542f25f5638f8e5e829fd3a3517e4f4ae711e03dd1a3b19829: Status 404 returned error can't find the container with id 7d6c7f3885de8b542f25f5638f8e5e829fd3a3517e4f4ae711e03dd1a3b19829 Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.961192 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxd9\" (UniqueName: \"kubernetes.io/projected/6a23c0ee-5648-448c-b772-83dced2891ce-kube-api-access-gsxd9\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 
00:12:40.965579 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fb762d1-812f-43f1-9eac-68034c1ecec7-kube-api-access\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.984188 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45vm\" (UniqueName: \"kubernetes.io/projected/aa90b3c2-febd-4588-a063-7fbbe82f00c1-kube-api-access-v45vm\") pod \"router-default-5c9bf7bc58-6jctv\" (UID: \"aa90b3c2-febd-4588-a063-7fbbe82f00c1\") " pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.994461 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v65wr" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.999709 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.999760 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.999824 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.999865 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:40 crc kubenswrapper[3556]: I1128 00:12:40.999920 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:40.999979 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000000 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000047 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") 
pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000137 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000160 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000283 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000347 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000388 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " 
pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000540 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000624 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000737 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000771 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000893 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 
28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.000991 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001033 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001079 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001124 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001207 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001285 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001310 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.001330 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.001350 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.001366 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.001425 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.501408781 +0000 UTC m=+23.093640771 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001469 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-netns\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001486 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001512 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001524 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001553 3556 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001579 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001598 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001625 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d0dcce3-d96e-48cb-9b9f-362105911589-rootfs\") pod \"machine-config-daemon-zpnhg\" (UID: \"9d0dcce3-d96e-48cb-9b9f-362105911589\") " pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001690 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001695 3556 
operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-mountpoint-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001745 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001752 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001781 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001787 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-socket-dir-parent\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001819 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/2b6d14a5-ca00-40c7-af7a-051a98a24eed-host-slash\") pod \"iptables-alerter-wwpnd\" (UID: \"2b6d14a5-ca00-40c7-af7a-051a98a24eed\") " pod="openshift-network-operator/iptables-alerter-wwpnd" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001865 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001903 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001925 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001973 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.001994 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002095 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-plugins-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002121 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002143 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002165 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002170 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-kubelet\") pod 
\"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002187 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002200 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-multus\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002239 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002265 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002257 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-cnibin\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " 
pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002283 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002331 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002337 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-system-cni-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002145 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-var-lib-cni-bin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002309 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-etc-kubernetes\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 
00:12:41.002374 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/cc291782-27d2-4a74-af79-c7dcb31535d2-host-etc-kube\") pod \"network-operator-767c585db5-zd56b\" (UID: \"cc291782-27d2-4a74-af79-c7dcb31535d2\") " pod="openshift-network-operator/network-operator-767c585db5-zd56b" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002363 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-multus-certs\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002408 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-audit-dir\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002410 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-host-run-k8s-cni-cncf-io\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002437 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/41e8708a-e40d-4d28-846b-c52eda4d1755-node-pullsecrets\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002445 3556 operation_generator.go:721] "MountVolume.SetUp succeeded 
for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-registration-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002480 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-dir\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002487 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-os-release\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002511 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002513 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002541 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-system-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002556 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002559 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002583 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002598 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002656 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " 
pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002660 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6a23c0ee-5648-448c-b772-83dced2891ce-hosts-file\") pod \"node-resolver-dn27q\" (UID: \"6a23c0ee-5648-448c-b772-83dced2891ce\") " pod="openshift-dns/node-resolver-dn27q" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002677 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9fb762d1-812f-43f1-9eac-68034c1ecec7-etc-ssl-certs\") pod \"cluster-version-operator-6d5d9649f6-x6d46\" (UID: \"9fb762d1-812f-43f1-9eac-68034c1ecec7\") " pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002678 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-hostroot\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002749 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-cnibin\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002741 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"ovnkube-node-44qcg\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") " pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002800 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-socket-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002878 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002928 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-os-release\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002930 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.002989 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-host\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003095 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003160 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003287 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-dir\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003291 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/12e733dd-0939-4f1b-9cbb-13897e093787-csi-data-dir\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003355 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003404 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " 
pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003481 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-conf-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003602 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003822 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.003990 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/475321a1-8b7e-4033-8f72-b05a8b377347-multus-cni-dir\") pod \"multus-q88th\" (UID: \"475321a1-8b7e-4033-8f72-b05a8b377347\") " pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.020523 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.020553 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc 
kubenswrapper[3556]: E1128 00:12:41.020569 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.020640 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.520618202 +0000 UTC m=+23.112850192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.028990 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-767c585db5-zd56b" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.042222 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.042265 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.042281 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.042377 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.54235322 +0000 UTC m=+23.134585210 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.049473 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc291782_27d2_4a74_af79_c7dcb31535d2.slice/crio-fd68a30e9d46179619cbfc093241e1c2433a771981e00354c754c815a52dbe3f WatchSource:0}: Error finding container fd68a30e9d46179619cbfc093241e1c2433a771981e00354c754c815a52dbe3f: Status 404 returned error can't find the container with id fd68a30e9d46179619cbfc093241e1c2433a771981e00354c754c815a52dbe3f Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.054430 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-q88th" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.056192 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"ecdcf4f9534cbfb07f30b6d43b275e6759dcb36dc7dd640547bf479214893425"} Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.061889 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.061929 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.061959 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.062064 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.562040571 +0000 UTC m=+23.154272561 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.063290 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"7d6c7f3885de8b542f25f5638f8e5e829fd3a3517e4f4ae711e03dd1a3b19829"} Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.086851 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod475321a1_8b7e_4033_8f72_b05a8b377347.slice/crio-c39541f5935349939993cafee79d96d66194a899ffea7e383add232317e81031 WatchSource:0}: Error finding container c39541f5935349939993cafee79d96d66194a899ffea7e383add232317e81031: Status 404 returned error can't find the container with id c39541f5935349939993cafee79d96d66194a899ffea7e383add232317e81031 Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.107905 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbqm\" (UniqueName: \"kubernetes.io/projected/7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8-kube-api-access-bwbqm\") pod \"multus-additional-cni-plugins-bzj2p\" (UID: \"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\") " pod="openshift-multus/multus-additional-cni-plugins-bzj2p" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.132578 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svnk\" (UniqueName: \"kubernetes.io/projected/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-kube-api-access-8svnk\") pod 
\"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.144996 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.145051 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.145069 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.145141 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.645116678 +0000 UTC m=+23.237348878 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.164193 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.164225 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.164238 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.164302 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.664283516 +0000 UTC m=+23.256515506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.189701 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.189805 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.189866 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.189948 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.689929367 +0000 UTC m=+23.282161357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.211043 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jw8\" (UniqueName: \"kubernetes.io/projected/f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e-kube-api-access-d7jw8\") pod \"node-ca-l92hr\" (UID: \"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e\") " pod="openshift-image-registry/node-ca-l92hr"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.213720 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.222629 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bzj2p"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.223760 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7xghp"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.224536 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.224562 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.224575 3556 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.224641 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:41.724619539 +0000 UTC m=+23.316851589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.226886 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410cf605_1970_4691_9c95_53fdc123b1f3.slice/crio-bd11d7518c09c8ced1d09379f57294541c46ab1b26c892ed97e07148cf4775fd WatchSource:0}: Error finding container bd11d7518c09c8ced1d09379f57294541c46ab1b26c892ed97e07148cf4775fd: Status 404 returned error can't find the container with id bd11d7518c09c8ced1d09379f57294541c46ab1b26c892ed97e07148cf4775fd
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.241143 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l92hr"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.242364 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.245483 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-wwpnd"
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.246127 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51a02bbf_2d40_4f84_868a_d399ea18a846.slice/crio-fad499579afeaf1cb2be9003c4650e299d2748125f1d35735a2881e6e9d429a4 WatchSource:0}: Error finding container fad499579afeaf1cb2be9003c4650e299d2748125f1d35735a2881e6e9d429a4: Status 404 returned error can't find the container with id fad499579afeaf1cb2be9003c4650e299d2748125f1d35735a2881e6e9d429a4
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.247699 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.256127 3556 kubelet.go:1935] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.260806 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.269304 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.277888 3556 kubelet.go:1935] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.281247 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dn27q"
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.281933 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8175ef1_0983_4bfe_a64e_fc6f5c5f7d2e.slice/crio-ce69643f99fedf1e1bd07e483d2440aa6768e844787113728fa7ad469a3c9eab WatchSource:0}: Error finding container ce69643f99fedf1e1bd07e483d2440aa6768e844787113728fa7ad469a3c9eab: Status 404 returned error can't find the container with id ce69643f99fedf1e1bd07e483d2440aa6768e844787113728fa7ad469a3c9eab
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.282573 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e19f9e8_9a37_4ca8_9790_c219750ab482.slice/crio-c199f4314aadffe223449b70c532061a711b719d9eb0c631901269df2d2fa349 WatchSource:0}: Error finding container c199f4314aadffe223449b70c532061a711b719d9eb0c631901269df2d2fa349: Status 404 returned error can't find the container with id c199f4314aadffe223449b70c532061a711b719d9eb0c631901269df2d2fa349
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.286083 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.297387 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0453d24-e872-43af-9e7a-86227c26d200\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-9-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.307860 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b6d14a5_ca00_40c7_af7a_051a98a24eed.slice/crio-bd260b0c415648e49becb720eeeb8b7911d5b3436ea193ead89a5a891f25b5b6 WatchSource:0}: Error finding container bd260b0c415648e49becb720eeeb8b7911d5b3436ea193ead89a5a891f25b5b6: Status 404 returned error can't find the container with id bd260b0c415648e49becb720eeeb8b7911d5b3436ea193ead89a5a891f25b5b6
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.312286 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa90b3c2_febd_4588_a063_7fbbe82f00c1.slice/crio-18ab4e836e15c1d4e2229c8d650cc972f2832ec39cb11977221777289214aa78 WatchSource:0}: Error finding container 18ab4e836e15c1d4e2229c8d650cc972f2832ec39cb11977221777289214aa78: Status 404 returned error can't find the container with id 18ab4e836e15c1d4e2229c8d650cc972f2832ec39cb11977221777289214aa78
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.322174 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d0dcce3_d96e_48cb_9b9f_362105911589.slice/crio-aea9321b9a162efd3336d44f71f8656d903cb1da63600b4ad9351b1ec6d8e0dd WatchSource:0}: Error finding container aea9321b9a162efd3336d44f71f8656d903cb1da63600b4ad9351b1ec6d8e0dd: Status 404 returned error can't find the container with id aea9321b9a162efd3336d44f71f8656d903cb1da63600b4ad9351b1ec6d8e0dd
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.339183 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/console-644bb77b49-5x5xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [console]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdb162caf10c0d078bc6c1001f448c5d011a2c70bd2d30100bf6e3b5340e8cae\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-644bb77b49-5x5xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.376666 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a23c0ee_5648_448c_b772_83dced2891ce.slice/crio-abe467509fef518f408b4d179e01a3ec4beebf8784ce126b333663186fcb43e2 WatchSource:0}: Error finding container abe467509fef518f408b4d179e01a3ec4beebf8784ce126b333663186fcb43e2: Status 404 returned error can't find the container with id abe467509fef518f408b4d179e01a3ec4beebf8784ce126b333663186fcb43e2
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.377720 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/installer-10-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79050916-d488-4806-b556-1b0078b31e53\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"installer-10-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 00:12:41 crc kubenswrapper[3556]: W1128 00:12:41.409940 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb762d1_812f_43f1_9eac_68034c1ecec7.slice/crio-5f8be7695c5587eb355abad479754a3cce99e4133e8df1c3fb4487a06e82c2f4 WatchSource:0}: Error finding container 5f8be7695c5587eb355abad479754a3cce99e4133e8df1c3fb4487a06e82c2f4: Status 404 returned error can't find the container with id 5f8be7695c5587eb355abad479754a3cce99e4133e8df1c3fb4487a06e82c2f4
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.420728 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1252c975e7e2b2f2f1e4a547ca59f1b5af16b1d6dc5b2aa2efdd99f9edc47a75\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c4363bf35c3850ea69697df9035284b39acfc987f5b168c9bf3f20002f44039\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:00:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:06Z\\\"}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-7cc7ff75d5-g9qv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.420803 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.420856 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.420887 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.420921 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.420950 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.420976 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.420990 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421060 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421083 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421069281 +0000 UTC m=+24.013301261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421098 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421114 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421003 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421126 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421147 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421118682 +0000 UTC m=+24.013350842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421156 3556 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421167 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421160103 +0000 UTC m=+24.013392083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421168 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421174 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421185 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421193 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421212 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421216 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421206204 +0000 UTC m=+24.013438194 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421218 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421267 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421229145 +0000 UTC m=+24.013461135 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421285 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421277146 +0000 UTC m=+24.013509346 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421288 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421285 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421341 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421346 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421366 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421388 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421412 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421420 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421450 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421461 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421482 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421437459 +0000 UTC m=+24.013669439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421531 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421546 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421554 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421569 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421571 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421599 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421572732 +0000 UTC m=+24.013804722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421621 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421632 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421641 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421609183 +0000 UTC m=+24.013841173 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421660 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421653124 +0000 UTC m=+24.013885114 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421667 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421684 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421693 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod
openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421712 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421721 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421712715 +0000 UTC m=+24.013944925 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421744 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421755 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421745456 +0000 UTC m=+24.013977446 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421770 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421796 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421799 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421791927 +0000 UTC m=+24.014024147 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421832 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421859 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421872 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421880 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421886 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod 
\"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421891 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421929 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42192087 +0000 UTC m=+24.014152860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421937 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421947 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.421968 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod 
\"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.421978 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.421968701 +0000 UTC m=+24.014200941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422042 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422072 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422076 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422068464 +0000 UTC m=+24.014300444 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422147 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422136646 +0000 UTC m=+24.014368636 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422043 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422147 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422162 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422155827 +0000 UTC m=+24.014387817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422210 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422235 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422244 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422236738 +0000 UTC m=+24.014468728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422265 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422292 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422332 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422336 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422382 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422395 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422384452 +0000 UTC m=+24.014616782 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422428 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422437 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422479 3556 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422452 3556 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422442433 +0000 UTC m=+24.014674633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422508 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422499234 +0000 UTC m=+24.014731224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422538 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422545 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422536275 +0000 UTC m=+24.014768265 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422570 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422580 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422601 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422594637 +0000 UTC m=+24.014826627 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422621 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422652 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422661 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422674 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422683 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422705 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422709 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422703039 +0000 UTC m=+24.014935029 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422736 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422769 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422794 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls 
podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422787541 +0000 UTC m=+24.015019531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422796 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422836 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422877 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422854412 +0000 UTC m=+24.015086402 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422891 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422885943 +0000 UTC m=+24.015117933 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422836 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422899 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422935 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.422951 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.422941664 +0000 UTC m=+24.015173654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.422974 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423001 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423048 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423056 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423081 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423073027 +0000 UTC m=+24.015305017 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423071 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423134 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423156 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423157 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423151349 +0000 UTC m=+24.015383339 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423195 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423205 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423212 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423217 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423232 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423226251 +0000 UTC m=+24.015458241 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423269 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423294 3556 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423314 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423327 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423318293 +0000 UTC m=+24.015550283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423354 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423381 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423372944 +0000 UTC m=+24.015604934 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423425 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423458 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423470 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423561 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423583 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42357373 +0000 UTC m=+24.015805720 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423608 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423621 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423601 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42359416 +0000 UTC m=+24.015826150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423659 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423650681 +0000 UTC m=+24.015882671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423676 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423668502 +0000 UTC m=+24.015900492 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423677 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423689 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423710 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423699483 +0000 UTC m=+24.015931683 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423728 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423719783 +0000 UTC m=+24.015952033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423519 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423746 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423765 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423773 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423764594 +0000 UTC m=+24.015996584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423795 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423823 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423821 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423812765 +0000 UTC m=+24.016044955 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423853 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423894 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423928 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423937 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423982 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423998 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423984 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.423974229 +0000 UTC m=+24.016206219 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423955 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.423970 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.423930 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424066 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42404652 +0000 UTC m=+24.016278510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424092 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424082211 +0000 UTC m=+24.016314201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424111 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424101702 +0000 UTC m=+24.016333692 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424130 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424121762 +0000 UTC m=+24.016353752 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424158 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424139433 +0000 UTC m=+24.016371423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424187 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424221 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424251 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424290 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424318 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424342 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424367 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424393 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424418 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424441 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424428 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424463 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424487 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424510 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424501091 +0000 UTC m=+24.016733081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424513 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424546 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424532882 +0000 UTC m=+24.016764872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424567 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424596 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424589903 +0000 UTC m=+24.016821893 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424620 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424644 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424688 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424718 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424748 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424739756 +0000 UTC m=+24.016971746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424694 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424776 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424793 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424784937 +0000 UTC m=+24.017016917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424821 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424824 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424853 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.424840049 +0000 UTC m=+24.017072039 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424879 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424884 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424906 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424912 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42490316 +0000 UTC m=+24.017135150 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424940 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424951 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.424981 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424993 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425032 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425024834 +0000 UTC m=+24.017256824 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.424392 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425042 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425061 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425055944 +0000 UTC m=+24.017287934 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425096 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425122 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425149 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425117196 +0000 UTC m=+24.017349186 (durationBeforeRetry 1s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425170 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425188 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425194 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425188387 +0000 UTC m=+24.017420597 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425211 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425224 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425216908 +0000 UTC m=+24.017448888 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425231 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425250 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425256 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425249879 +0000 UTC m=+24.017481869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425253 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425272 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425264439 +0000 UTC m=+24.017496429 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425293 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425302 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42529502 +0000 UTC m=+24.017527010 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425302 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425319 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42531335 +0000 UTC m=+24.017545340 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425335 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425345 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425362 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425355771 +0000 UTC m=+24.017587761 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425369 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425384 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425390 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425398 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425389892 +0000 UTC m=+24.017621882 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425423 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425429 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425447 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425454 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425447093 +0000 UTC m=+24.017679083 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425455 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425449 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425475 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425465414 +0000 UTC m=+24.017697404 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425483 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425495 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425487464 +0000 UTC m=+24.017719454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425499 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425510 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425504915 +0000 UTC m=+24.017736905 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425527 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425521295 +0000 UTC m=+24.017753285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425097 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425547 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425563 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425154 3556 
projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425566 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425560206 +0000 UTC m=+24.017792196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425612 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425624 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425567 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425712 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425754 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object 
"hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425586 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425787 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425630 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425599847 +0000 UTC m=+24.017831837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425848 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425836582 +0000 UTC m=+24.018068562 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425870 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425928 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425936 3556 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.425955 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.425981 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:42.425964475 +0000 UTC m=+24.018196465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.426045 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.425998986 +0000 UTC m=+24.018230976 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.426071 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.426055687 +0000 UTC m=+24.018287677 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.426082 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.426086 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.426079578 +0000 UTC m=+24.018311568 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.427051 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42703892 +0000 UTC m=+24.019270910 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.427107 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.427099682 +0000 UTC m=+24.019331672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427167 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427198 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427251 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427287 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427336 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427362 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427412 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427437 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427461 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427529 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427562 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:41 crc 
kubenswrapper[3556]: I1128 00:12:41.427613 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427649 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427701 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427726 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427779 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: 
\"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427840 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427868 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427915 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427953 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.427998 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: 
\"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.428044 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428092 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428115 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428125 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428137 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.426133 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object 
"openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428221 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428265 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428275 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428298 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428330 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428433 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428448 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428456 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod 
openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428510 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428527 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428535 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428592 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428607 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428617 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428650 3556 secret.go:194] Couldn't get secret 
openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428710 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428772 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.428095 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428712 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428821 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428831 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428856 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object 
"openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428882 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428820 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428952 3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428966 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428986 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428996 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429029 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428160 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.428151277 +0000 UTC m=+24.020383267 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429063 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429055297 +0000 UTC m=+24.021287287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429078 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429071438 +0000 UTC m=+24.021303428 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429097 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429108 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429118 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429088438 +0000 UTC m=+24.021320418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429132 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42912668 +0000 UTC m=+24.021358670 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429145 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429138 +0000 UTC m=+24.021369990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429157 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.42915247 +0000 UTC m=+24.021384460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429165 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429177 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429184 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429171 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429163561 +0000 UTC m=+24.021395541 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429207 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429200692 +0000 UTC m=+24.021432672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429224 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429217352 +0000 UTC m=+24.021449342 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429240 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429234482 +0000 UTC m=+24.021466472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429274 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429247153 +0000 UTC m=+24.021479143 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429290 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429281823 +0000 UTC m=+24.021513813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429302 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429295854 +0000 UTC m=+24.021527844 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429118 3556 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.429330 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428437 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429392 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429346 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429336845 +0000 UTC m=+24.021568835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.428607 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429428 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.429445 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429467 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429447467 +0000 UTC m=+24.021679457 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.429499 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429529 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.429534 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429559 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429551579 +0000 UTC m=+24.021783569 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429584 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429613 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429605981 +0000 UTC m=+24.021837971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429634 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429628481 +0000 UTC m=+24.021860471 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429647 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429641502 +0000 UTC m=+24.021873492 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429653 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429702 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429670 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429656612 +0000 UTC m=+24.021888592 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.429604 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429727 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429718783 +0000 UTC m=+24.021950773 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.429769 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429772 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429764674 +0000 UTC m=+24.021996664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429796 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429788815 +0000 UTC m=+24.022020805 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.429801 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429815 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429803635 +0000 UTC m=+24.022035625 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429841 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429853 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429845316 +0000 UTC m=+24.022077306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429868 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429862257 +0000 UTC m=+24.022094247 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429884 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429876937 +0000 UTC m=+24.022108917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429911 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429902377 +0000 UTC m=+24.022134357 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429924 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:42.429918758 +0000 UTC m=+24.022150738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429930 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429938 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429931038 +0000 UTC m=+24.022163028 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.429960 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.429953449 +0000 UTC m=+24.022185659 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430005 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430057 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430151 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430188 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.430181334 +0000 UTC m=+24.022413564 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430211 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430153 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430268 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430306 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430235 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430324 3556 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.430295686 +0000 UTC m=+24.022527676 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430355 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430362 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.430353988 +0000 UTC m=+24.022585978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430383 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.430376078 +0000 UTC m=+24.022608058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430389 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430415 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.430409469 +0000 UTC m=+24.022641459 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430427 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430486 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430512 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.430555 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430706 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430744 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430779 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430746 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.430739187 +0000 UTC m=+24.022971177 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430804 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430815 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:42.430808789 +0000 UTC m=+24.023040779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430826 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.430821039 +0000 UTC m=+24.023053029 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.430840 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.43083364 +0000 UTC m=+24.023065630 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.456744 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410cf605-1970-4691-9c95-53fdc123b1f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:528874097a1d537796a103d2482d59cbd1a4d75aebe63f802a74e22cedaa1009\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-77c846df58-6l97b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.499997 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2b2e2d762c8b359ec567ae879d9fedbdd2fb02f477f190f4465a6d6279b220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:01:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:59:16Z\\\"}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-6f6cb54958-rbddb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.531861 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.531952 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.532007 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.532099 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532168 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object 
"openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532207 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532221 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532283 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.532266815 +0000 UTC m=+24.124498805 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.532209 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532316 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532332 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532343 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532422 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532436 3556 projected.go:294] Couldn't get configMap 
openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532442 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532499 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532496 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532539 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.532522081 +0000 UTC m=+24.124754071 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532512 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532557 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.532550951 +0000 UTC m=+24.124782941 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532564 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532539 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532614 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.532620 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532645 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.532620193 +0000 UTC m=+24.124852183 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532673 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532686 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532694 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532720 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.532714215 +0000 UTC m=+24.124946205 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.532800 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532808 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.532792928 +0000 UTC m=+24.125025188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532877 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532891 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532899 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532929 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.532921041 +0000 UTC m=+24.125153031 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.532900 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532951 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532964 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.532971 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.532973 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object 
"openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533100 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533113 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533144 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.533134406 +0000 UTC m=+24.125366596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.533477 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.533550 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: 
\"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533619 3556 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533632 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533641 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533657 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533655 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.533577446 +0000 UTC m=+24.125809436 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533674 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.533765 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.534823 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.534803155 +0000 UTC m=+24.127035145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.534858 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.534846896 +0000 UTC m=+24.127078886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.540563 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4006587f6315522f104e61b48def4e51bacb5af9088fb533e3cbce958a7a26a2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cacbc14e2522c21376a7d66a61a079d962c7b38a2d0f39522c7854c8ae5956a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:17Z\\\",\\\"message\\\":\\\"] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:36.668906 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nW0813 20:04:50.884304 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.FeatureGate: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:50.918193 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: failed to 
list *v1.FeatureGate: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/featuregates?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nW0813 20:04:52.839119 1 reflector.go:539] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nE0813 20:04:52.839544 1 reflector.go:147] github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.ClusterVersion: failed to list *v1.ClusterVersion: Get \\\\\\\"https://10.217.4.1:443/apis/config.openshift.io/v1/clusterversions?limit=500\\\\u0026resourceVersion=0\\\\\\\": dial tcp 10.217.4.1:443: connect: connection refused\\\\nF0813 20:05:17.755149 1 main.go:175] timed out waiting for FeatureGate detection\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T20:04:16Z\\\"}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-649bd778b4-tt5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.579711 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec1bae8b-3200-4ad9-b33b-cf8701f3027c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f0ea9c4dd64fcb95e7d523331e9e46cf36132427af07bd759cbd1837eaf903\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e95a421ea1d60cbffa4781a464aee3e316ed5550dd6c294388a2166b7737ad2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9064bff19516de0a9d20107cafe26bbdf325661decdde8161f7c85fc7cf205e4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-08-13T20:05:09Z\\\",\\\"message\\\":\\\"ck openshift-cluster-machine-approver/cluster-machine-approver-leader: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nE0813 20:04:17.937199 1 leaderelection.go:332] error retrieving resource lock openshift-cluster-machine-approver/cluster-machine-approver-leader: Get \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nI0813 20:04:38.936003 1 leaderelection.go:285] failed to renew lease openshift-cluster-machine-approver/cluster-machine-approver-leader: timed out waiting for the condition\\\\nE0813 20:05:08.957257 1 leaderelection.go:308] Failed to release lock: Put \\\\\\\"https://10.217.4.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader\\\\\\\": dial tcp 10.217.4.1:443: i/o timeout\\\\nF0813 20:05:08.990431 1 main.go:235] unable to run the manager: leader election lost\\\\nI0813 20:05:09.028498 1 internal.go:516] \\\\\\\"Stopping and waiting for non leader election runnables\\\\\\\"\\\\nI0813 20:05:09.028591 1 internal.go:520] 
\\\\\\\"Stopping and waiting for leader election runnables\\\\\\\"\\\\nI0813 20:05:09.028608 1 internal.go:526] \\\\\\\"Stopping and waiting for caches\\\\\\\"\\\\nI0813 20:05:09.028585 1 recorder.go:104] \\\\\\\"crc_998ad275-6fd6-49e7-a1d3-0d4cd7031028 stopped leading\\\\\\\" logger=\\\\\\\"events\\\\\\\" type=\\\\\\\"Normal\\\\\\\" object={\\\\\\\"kind\\\\\\\":\\\\\\\"Lease\\\\\\\",\\\\\\\"namespace\\\\\\\":\\\\\\\"openshift-cluster-machine-approver\\\\\\\",\\\\\\\"name\\\\\\\":\\\\\\\"cluster-machine-approver-leader\\\\\\\",\\\\\\\"uid\\\\\\\":\\\\\\\"396b5b52-acf2-4d11-8e98-69ecff2f52d0\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"coordination.k8s.io/v1\\\\\\\",\\\\\\\"resourceVersion\\\\\\\":\\\\\\\"30699\\\\\\\"} reason=\\\\\\\"LeaderElection\\\\\\\"\\\\nI0813 20:05:09.028819 1 internal.go:530] \\\\\\\"Stopping and waiting for webhooks\\\\\\\"\\\\nI0813 20:05:09.028849 1 internal.go:533] \\\\\\\"Stopping and waiting for HTTP servers\\\\\\\"\\\\nI0813 20:05:09.028884 1 internal.go:537] \\\\\\\"Wait completed, proceeding to shutdown the manager\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-08-13T19:50:45Z\\\"}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-7874c8775-kh4j9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.616405 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/revision-pruner-8-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72854c1e-5ae2-4ed6-9e50-ff3bccde2635\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"revision-pruner-8-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.637726 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.637926 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.637955 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.637969 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.638045 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 
podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.638029803 +0000 UTC m=+24.230261793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.638104 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.638309 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.638331 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.638343 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.638395 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.638380821 +0000 UTC m=+24.230612811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.658253 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"deaee4f4-7b7a-442d-99b7-c8ac62ef5f27\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29251920-wcws2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.702071 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"663515de-9ac9-4c55-8755-a591a2de3714\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:19Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:19Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8167d39fa1d07b9565cc04c1789413635b39d3825d42d9474a4f501c4908f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1ce9342b0ceac619a262bd0894094be1e318f913e06ed5392b9e45dfc973791\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":7,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T00:12:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://00c4fd2ed360e13891c41dd4a8e389d89e9453542b13dde1c17f926f7ba2d74c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fc7ec6aed1866d8244774ffcad733f9395679f49cc12580f349b9c47358f842\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2025-11-28T00:12:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://94343bc4605fa1eac03de87ea69d17b924155ee0800e855ad538b485fc3c606d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T00:12:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a42f5a37e78b02bf0d93bcaf01da23eb2c4966060b4ade3d1b6b3e26db97d268\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:bca1f698f7f613e9e8f2626aacc55323c6a5bd50ca26c920a042e5b8c9ab9c0f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T00:12:23Z\\\"}}}],\\\"startTime\\\":\\\"2025-11-28T00:12:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.737606 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-console/downloads-65476884b9-9wcvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6268b7fe-8910-4505-b404-6f1df638105c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f58cff96550345ff1cbd0c3df73e478f38310996ac8a0a77006b25cc2e3351f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7878320974e3985f5732deb5170463e1dafc9265287376679a29ea7923e84c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T20:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T20:03:51Z\\\"}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-65476884b9-9wcvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.739664 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.739740 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.739786 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.739873 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.740212 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.740269 3556 projected.go:294] 
Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.740285 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.739895 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.740932 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.740977 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.739960 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.741130 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.741150 3556 projected.go:200] Error preparing data for 
projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.740100 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.741229 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.741241 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.740565 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.740535023 +0000 UTC m=+24.332767013 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.741413 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.741376643 +0000 UTC m=+24.333608633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.741459 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.741448505 +0000 UTC m=+24.333680495 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.741483 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:42.741472385 +0000 UTC m=+24.333704375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.780579 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:83a060571691f85f6019ba7983d8d2f41b1845e371316ab2d0016226a9f111ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bzj2p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.819997 3556 status_manager.go:877] "Failed to update status for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa90b3c2-febd-4588-a063-7fbbe82f00c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T00:12:40Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c593fc9fefc335235a7118c3b526f9f265397b62293169959e09a693033db15\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b6b2db3637481270955ecfaf63f08f80ee970eeaa15bd54430df884620e38ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-08-13T19:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-08-13T19:56:16Z\\\"}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}}}]}}\" for pod \"openshift-ingress\"/\"router-default-5c9bf7bc58-6jctv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.912665 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.912854 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.912916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.912986 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913065 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913147 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913201 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913279 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913375 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913416 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913487 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913614 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913661 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913751 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913794 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913872 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.913920 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.913984 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.914046 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.914115 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.914158 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.914230 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.914272 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.914330 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.914373 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:41 crc kubenswrapper[3556]: E1128 00:12:41.914430 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.969940 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 00:12:41 crc kubenswrapper[3556]: I1128 00:12:41.992897 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.067588 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"d0c87768e0e0fd45ce95de95b3cd35d3fb1db912a15fc88802ee49f35bfc9f47"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.067646 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"5825caecff59ec411acfa2888077a9dd43f86687eece88fb8f014b10c1a3740e"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.067661 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"aea9321b9a162efd3336d44f71f8656d903cb1da63600b4ad9351b1ec6d8e0dd"} Nov 28 00:12:42 crc 
kubenswrapper[3556]: I1128 00:12:42.072783 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="324b84ed928c7beff552526b8bb7cec0379a0ef0d4d85002e36651b6da681716" exitCode=0 Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.072828 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"324b84ed928c7beff552526b8bb7cec0379a0ef0d4d85002e36651b6da681716"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.072892 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"c199f4314aadffe223449b70c532061a711b719d9eb0c631901269df2d2fa349"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.074898 3556 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="349215d30730687352d77f6cea51fb28617ecdef511341ab4d8b5c93ac6772a1" exitCode=0 Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.074998 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"349215d30730687352d77f6cea51fb28617ecdef511341ab4d8b5c93ac6772a1"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.075051 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"2b74c6ca726b110fe4e0bb02029b38410e5b1b0109a71f4c1a71f02ae72b690b"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.078404 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"26ea99a990c8b29e8794df03ad0ad41b98f38cf49bbad1e53ff53371275f3629"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.078453 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"18ab4e836e15c1d4e2229c8d650cc972f2832ec39cb11977221777289214aa78"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.079895 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"bd260b0c415648e49becb720eeeb8b7911d5b3436ea193ead89a5a891f25b5b6"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.083748 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v65wr" event={"ID":"bf1a8b70-3856-486f-9912-a2de1d57c3fb","Type":"ContainerStarted","Data":"320c4e7272c994f20b74a0af1a51bd3982e9881b30b6438600984b08c46d7eb7"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.095668 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"f018e3bd651140cf5449d8f1942a49b8d0fa8300de204f63155922c3075ae4cb"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.095729 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dn27q" event={"ID":"6a23c0ee-5648-448c-b772-83dced2891ce","Type":"ContainerStarted","Data":"abe467509fef518f408b4d179e01a3ec4beebf8784ce126b333663186fcb43e2"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.134183 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" 
event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"042cdc3a9c13859b73154aae42fd7489d494a3f9d472c75262210e290b2e5727"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.134695 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"bd11d7518c09c8ced1d09379f57294541c46ab1b26c892ed97e07148cf4775fd"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.140785 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"314ec772307939ce678608e78ac7499b978e65bb573b20130e4bd9e02d91ae1e"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.140847 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7874c8775-kh4j9" event={"ID":"ec1bae8b-3200-4ad9-b33b-cf8701f3027c","Type":"ContainerStarted","Data":"f2921a610cfcbf91ed301b8bcfdb93b9126dad820e8f4401f2811da2a0f7c30b"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.146249 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"40ba42e1bbe8c14e8028b7ea05590b7d47a413d81e22020121d8ec5a836dfd80"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.146338 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-767c585db5-zd56b" event={"ID":"cc291782-27d2-4a74-af79-c7dcb31535d2","Type":"ContainerStarted","Data":"fd68a30e9d46179619cbfc093241e1c2433a771981e00354c754c815a52dbe3f"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.148692 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"a9281257ec2216e7460279a4e2b3a33f3c1bf969d5ae4ec655ed2f25d0de223c"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.148737 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-6d5d9649f6-x6d46" event={"ID":"9fb762d1-812f-43f1-9eac-68034c1ecec7","Type":"ContainerStarted","Data":"5f8be7695c5587eb355abad479754a3cce99e4133e8df1c3fb4487a06e82c2f4"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.150603 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"6e48d427ed2b5ca2c86082810b5594169678d94b73922fdf6c408e4bbe775561"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.150645 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"c39541f5935349939993cafee79d96d66194a899ffea7e383add232317e81031"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.155360 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"e99f75d7ee6cf9dfcc0107591593930b140ffcc2318d12ab2d486f1e952e4602"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.155392 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"dfe7d42842080ffa34ae99292cf9ccac94eb91cef5bcf253177564cac596bd10"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.155405 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-7xghp" event={"ID":"51a02bbf-2d40-4f84-868a-d399ea18a846","Type":"ContainerStarted","Data":"fad499579afeaf1cb2be9003c4650e299d2748125f1d35735a2881e6e9d429a4"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.157042 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"c04a5692e22787c045104695fe8a60dcae1f346f0f70814e3eaacd387da05d25"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.157064 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l92hr" event={"ID":"f8175ef1-0983-4bfe-a64e-fc6f5c5f7d2e","Type":"ContainerStarted","Data":"ce69643f99fedf1e1bd07e483d2440aa6768e844787113728fa7ad469a3c9eab"} Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.261933 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.267295 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:42 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:42 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:42 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.267427 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469553 3556 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469631 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469665 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469695 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469731 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469776 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469817 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469860 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469891 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469930 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: 
\"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469958 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.469990 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470042 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470080 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470110 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470145 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470179 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470207 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470248 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470278 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470305 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470333 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470388 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470415 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470545 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470584 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470614 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470645 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470672 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470706 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470740 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470775 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470805 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:42 crc 
kubenswrapper[3556]: I1128 00:12:42.470834 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470864 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470899 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470928 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.470963 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: 
\"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471004 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471055 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471089 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471162 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471193 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: 
\"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471225 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471252 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471286 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471314 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471343 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471376 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471408 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471448 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471479 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471521 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471550 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471592 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471622 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471652 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471682 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471722 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471753 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471782 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:42 
crc kubenswrapper[3556]: I1128 00:12:42.471815 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471845 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471875 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.471971 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472026 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472060 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472098 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472130 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472161 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 
00:12:42.472197 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472228 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472259 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472288 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472319 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod 
\"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472350 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472380 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472411 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472443 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472472 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.472471 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472502 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.472518 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472536 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472570 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: 
\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472602 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472633 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472676 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472706 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472739 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472769 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472801 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472831 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472870 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 
00:12:42.472904 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.472920 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.472903107 +0000 UTC m=+26.065135097 (durationBeforeRetry 2s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.472996 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.473055 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.473082 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473088 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.473124 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.473156 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473173 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473147292 +0000 UTC m=+26.065379482 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.473208 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.473255 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473263 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473275 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473284 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not 
registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.473290 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473315 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473306936 +0000 UTC m=+26.065538916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473377 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473401 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473395128 +0000 UTC m=+26.065627118 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473414 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473443 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473451 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473441469 +0000 UTC m=+26.065673469 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473455 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473478 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473504 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473511 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47350208 +0000 UTC m=+26.065734080 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473532 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473526661 +0000 UTC m=+26.065758651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473559 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473577 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473587 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473578672 +0000 UTC m=+26.065810672 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473610 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473601652 +0000 UTC m=+26.065833862 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473639 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473649 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473667 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473659945 +0000 UTC m=+26.065891945 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473687 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473679585 +0000 UTC m=+26.065911585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473700 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473725 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473718986 +0000 UTC m=+26.065950976 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473731 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473765 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473777 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473758397 +0000 UTC m=+26.065990617 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473807 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473810 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:44.473796238 +0000 UTC m=+26.066028428 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473835 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473825119 +0000 UTC m=+26.066057109 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473877 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473895 3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473916 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473926 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object 
"openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473930 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473898 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47389236 +0000 UTC m=+26.066124350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473979 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473963382 +0000 UTC m=+26.066195622 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474037 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474086 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474130 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474130 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.473994692 +0000 UTC m=+26.066226922 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474161 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474153036 +0000 UTC m=+26.066385016 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474172 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474166776 +0000 UTC m=+26.066398766 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474185 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:44.474179007 +0000 UTC m=+26.066410997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474217 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474244 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474235538 +0000 UTC m=+26.066467528 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474280 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474301 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474295309 +0000 UTC m=+26.066527299 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474304 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474339 3556 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474344 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47433373 +0000 UTC m=+26.066565730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474380 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474371741 +0000 UTC m=+26.066603961 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474407 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474417 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474428 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474440 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474433742 +0000 UTC m=+26.066665722 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474443 3556 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474476 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474469593 +0000 UTC m=+26.066701573 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474509 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474530 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474542 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474563 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474577 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474511 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object 
"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474577 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474568195 +0000 UTC m=+26.066800195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474622 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474633 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474624007 +0000 UTC m=+26.066856237 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474652 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474642007 +0000 UTC m=+26.066874227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474669 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474660557 +0000 UTC m=+26.066892797 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474675 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474702 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474694208 +0000 UTC m=+26.066926208 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474706 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474741 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474733039 +0000 UTC m=+26.066965039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474744 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474777 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47476931 +0000 UTC m=+26.067001310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474784 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474796 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474806 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object 
"openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474828 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474831 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474825331 +0000 UTC m=+26.067057321 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474863 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474855802 +0000 UTC m=+26.067087802 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474877 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474888 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474907 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474910 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474903013 +0000 UTC m=+26.067135003 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474944 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474936324 +0000 UTC m=+26.067168324 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474944 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474974 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474979 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474969354 +0000 UTC m=+26.067201354 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.474997 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.474990345 +0000 UTC m=+26.067222335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475043 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475080 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475071548 +0000 UTC m=+26.067303548 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475048 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475118 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475133 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475137 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475130139 +0000 UTC m=+26.067362139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475142 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475172 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475176 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47516928 +0000 UTC m=+26.067401280 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475205 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475198981 +0000 UTC m=+26.067430971 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475218 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475239 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475251 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475242152 +0000 UTC m=+26.067474152 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475271 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475262672 +0000 UTC m=+26.067494882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475288 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475314 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475306923 +0000 UTC m=+26.067538923 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475321 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475335 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475342 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475357 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475368 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475361364 +0000 UTC m=+26.067593354 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475392 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475383115 +0000 UTC m=+26.067615325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475415 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475442 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475457 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475464 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object 
"openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475442 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475434916 +0000 UTC m=+26.067666916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475481 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475492 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475486087 +0000 UTC m=+26.067718077 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475512 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475503878 +0000 UTC m=+26.067735878 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475564 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475575 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475591 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475583879 +0000 UTC m=+26.067815879 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475611 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47560161 +0000 UTC m=+26.067833610 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475624 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475636 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475643 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475649 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object 
"openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475666 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475658461 +0000 UTC m=+26.067890441 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475681 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475675002 +0000 UTC m=+26.067906992 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475708 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475730 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:44.475723203 +0000 UTC m=+26.067955413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475764 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475788 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475782074 +0000 UTC m=+26.068014064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475818 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475837 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475831775 +0000 UTC m=+26.068063995 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475868 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475889 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475883566 +0000 UTC m=+26.068115556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475922 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475943 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475934477 +0000 UTC m=+26.068166467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475977 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.475998 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.475993469 +0000 UTC m=+26.068225459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476037 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476087 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.476071341 +0000 UTC m=+26.068303341 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476163 3556 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476240 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476182 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476307 3556 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476241 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476340 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476283 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config 
podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.476272485 +0000 UTC m=+26.068504485 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476373 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476417 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476413 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476480 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476490 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476499 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476504 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476538 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476536 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476573 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476582 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476586 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476596 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476600 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476602 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476612 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476624 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476635 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476634 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473985 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476658 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476664 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476672 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476511 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476694 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.473210 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476388 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476652 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476724 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476782 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476812 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476419 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.476385338 +0000 UTC m=+26.068617368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476758 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476327 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476374 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476888 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.476840519 +0000 UTC m=+26.069072519 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476378 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476917 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47690489 +0000 UTC m=+26.069136890 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476430 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476966 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.476956802 +0000 UTC m=+26.069188802 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476975 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476984 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.476975942 +0000 UTC m=+26.069207942 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476428 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477028 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.476995342 +0000 UTC m=+26.069227342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476445 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.477067 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477070 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477056134 +0000 UTC m=+26.069288124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477117 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477133 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477123425 +0000 UTC m=+26.069355425 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.477119 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477154 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477144296 +0000 UTC m=+26.069376296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476458 3556 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477178 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477196 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477187647 +0000 UTC m=+26.069419637 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476284 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.472539 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477248 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477263 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476498 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.477183 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476607 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476742 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477317 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476614 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477321 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477213 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477207357 +0000 UTC m=+26.069439347 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.476454 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477498 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477451453 +0000 UTC m=+26.069683603 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477549 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477527085 +0000 UTC m=+26.069759345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477603 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477586466 +0000 UTC m=+26.069818676 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477648 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477629187 +0000 UTC m=+26.069861417 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477684 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477665368 +0000 UTC m=+26.069897608 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477737 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477716509 +0000 UTC m=+26.069948669 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477782 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.4777637 +0000 UTC m=+26.069995920 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477816 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477798961 +0000 UTC m=+26.070031201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477849 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477830912 +0000 UTC m=+26.070063142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477896 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477864103 +0000 UTC m=+26.070096333 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477932 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477913164 +0000 UTC m=+26.070145384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.477965 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477949715 +0000 UTC m=+26.070181975 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478046 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.477991706 +0000 UTC m=+26.070223936 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478095 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478075188 +0000 UTC m=+26.070307418 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478142 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478122219 +0000 UTC m=+26.070354389 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478185 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47816692 +0000 UTC m=+26.070399150 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478231 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478210691 +0000 UTC m=+26.070442941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478277 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478257152 +0000 UTC m=+26.070489382 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478329 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478307243 +0000 UTC m=+26.070539453 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478376 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478358684 +0000 UTC m=+26.070590934 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478419 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478402105 +0000 UTC m=+26.070634345 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478462 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478441736 +0000 UTC m=+26.070673956 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478512 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478491937 +0000 UTC m=+26.070724147 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478557 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478539109 +0000 UTC m=+26.070771329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.478655 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478705 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.478737 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478752 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478740153 +0000 UTC m=+26.070972153 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478772 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478764024 +0000 UTC m=+26.070996024 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478794 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:44.478782574 +0000 UTC m=+26.071014574 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478814 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478802865 +0000 UTC m=+26.071034865 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478833 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478825005 +0000 UTC m=+26.071057005 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.478865 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478877 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.478901 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.478933 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478949 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client 
podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.478928857 +0000 UTC m=+26.071161057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.478993 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479053 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.47904476 +0000 UTC m=+26.071276760 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.479052 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479182 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479197 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479231 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.479222615 +0000 UTC m=+26.071454615 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479260 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.479240135 +0000 UTC m=+26.071472355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479276 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479305 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.479298177 +0000 UTC m=+26.071530177 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.479340 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.479394 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479440 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479457 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479493 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.479484771 +0000 UTC m=+26.071716771 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.479549 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479602 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479618 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.479571733 +0000 UTC m=+26.071803763 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.479704 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.479778 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479877 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.479851179 +0000 UTC m=+26.072083319 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479959 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.479996 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.479987602 +0000 UTC m=+26.072219592 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480083 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.480122 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc 
kubenswrapper[3556]: E1128 00:12:42.480173 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.480147926 +0000 UTC m=+26.072380086 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.480248 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480288 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.480339 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480352 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:44.48033692 +0000 UTC m=+26.072569140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.480414 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.480491 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480549 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.480565 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480650 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key 
podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.480588906 +0000 UTC m=+26.072821126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480686 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480720 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480758 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.480740231 +0000 UTC m=+26.072972251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480795 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.480779891 +0000 UTC m=+26.073012101 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480810 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480881 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480906 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.480882954 +0000 UTC m=+26.073115104 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480908 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.480941 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.481005 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.481037 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.480995046 +0000 UTC m=+26.073227066 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.481078 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.481064268 +0000 UTC m=+26.073296528 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.581853 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.582456 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.582630 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.583109 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.583144 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.583157 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.583206 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.58318954 +0000 UTC m=+26.175421530 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.583332 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.583493 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.583580 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.583630 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.583867 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.584036 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584124 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584146 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584156 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584187 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.584178762 +0000 UTC m=+26.176410752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584323 3556 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584351 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584366 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584523 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584505 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.5844814 +0000 UTC m=+26.176713410 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584592 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584604 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584611 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584666 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584678 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584686 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584743 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584754 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584761 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584782 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.584775137 +0000 UTC m=+26.177007127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584798 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.584791957 +0000 UTC m=+26.177023947 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584856 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584867 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584894 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.584886399 +0000 UTC m=+26.177118389 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584545 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584914 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584944 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.584935871 +0000 UTC m=+26.177168121 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584977 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.584997 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585026 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585060 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.585052573 +0000 UTC m=+26.177284563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.585063 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585094 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.585084084 +0000 UTC m=+26.177316064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585132 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585154 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585164 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585193 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.585184036 +0000 UTC m=+26.177416276 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.585257 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585366 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585379 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585386 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.585408 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.585401701 +0000 UTC m=+26.177633691 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.686785 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.687327 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.687369 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.687388 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.687472 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.687450041 +0000 UTC m=+26.279682041 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.687673 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.687984 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.688058 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.688077 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.688183 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.688151108 +0000 UTC m=+26.280383108 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.792553 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.793624 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.793734 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.793766 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.793780 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.792947 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.793847 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.793829143 +0000 UTC m=+26.386061133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.793899 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.793925 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794032 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.793995907 +0000 UTC m=+26.386227897 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.794364 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.794539 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794626 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794660 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794663 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794673 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794679 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794688 3556 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794747 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.794713334 +0000 UTC m=+26.386945404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.794772 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:44.794764795 +0000 UTC m=+26.386996995 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.912441 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.912629 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.912681 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.912743 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.912779 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.912839 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.912879 3556 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.912936 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.912974 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913048 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913090 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913155 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913190 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913259 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913299 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913373 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913409 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913471 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913509 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913577 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913612 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913682 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913721 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913782 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913817 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913877 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.913913 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.913968 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914002 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914081 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914120 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914174 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914210 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914266 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914303 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914359 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914472 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914509 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914566 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914604 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914676 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914730 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914815 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914851 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.914910 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.914963 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.915041 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.915085 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.915147 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.915190 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.915256 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.915293 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.915351 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.915392 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.915451 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.915496 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.915556 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.915597 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.915654 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.916445 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.916694 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.916787 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.916923 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:12:42 crc kubenswrapper[3556]: I1128 00:12:42.916964 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:42 crc kubenswrapper[3556]: E1128 00:12:42.917082 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.164132 3556 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="2f5d655504f11f9751d880f03483d2e472554fd36fc1cbf787a16662c690ef97" exitCode=0 Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.164213 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"2f5d655504f11f9751d880f03483d2e472554fd36fc1cbf787a16662c690ef97"} Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.167802 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-77c846df58-6l97b" event={"ID":"410cf605-1970-4691-9c95-53fdc123b1f3","Type":"ContainerStarted","Data":"5ab4767da57fc2e7b72e99dff94713a849b823be938b84a6c7184738d1024cda"} Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.270428 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:43 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:43 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:43 crc 
kubenswrapper[3556]: healthz check failed Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.270959 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912030 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912137 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.912210 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912272 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912406 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.912583 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912719 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912778 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.912848 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912907 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912900 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.912960 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.912974 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.913066 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.913098 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.913066 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.913200 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.913215 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.914655 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.914770 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:12:43 crc kubenswrapper[3556]: I1128 00:12:43.914831 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.914867 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.914991 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.915132 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.915253 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.915326 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.915391 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:12:43 crc kubenswrapper[3556]: E1128 00:12:43.915462 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.178309 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"1f0bc12aff24220a56c1a2424f5c5a776edc66bf8174b52fcc5b43743a6f46d3"} Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.178383 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"2f32e2413540f8b606bace46011915b3f4345f8091da03e50af2414bd037a501"} Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.178400 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" 
event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"4f2242c62043fe6b5b8237b1f7367052a86f5f4d37ec86376ad68540f41166b6"} Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.178414 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"add4c854492fb92ad3dfe4f839c8b265eb256f8ee4a5541e1ffbd5863baf61ef"} Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.178426 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"f667500e31bbd20e18020f3feda9c5fcb95413c4c60f5ae6b409e073c784b3a5"} Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.182164 3556 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="dc1842893159c6ba3d3079e44e41c12c1aa9ad74b47502396de25f3c31f5d918" exitCode=0 Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.182218 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"dc1842893159c6ba3d3079e44e41c12c1aa9ad74b47502396de25f3c31f5d918"} Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.264677 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:44 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:44 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:44 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.264761 3556 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.558237 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.558696 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.558723 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.558758 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.558786 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.558833 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.558867 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.558424 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.558840 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.558944 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.558919845 +0000 UTC m=+30.151151835 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.558963 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.558967 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.558956476 +0000 UTC m=+30.151188686 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559049 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559080 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559042968 +0000 UTC m=+30.151274958 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559110 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559100729 +0000 UTC m=+30.151332709 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559122 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559128 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.55912057 +0000 UTC m=+30.151352560 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559168 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559157821 +0000 UTC m=+30.151389901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559274 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559310 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559361 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod 
\"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559426 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559440 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559475 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559476 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559468568 +0000 UTC m=+30.151700558 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559610 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559637 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559548 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559577 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559751 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 
28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559777 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559788 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559780325 +0000 UTC m=+30.152012315 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559801 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559795785 +0000 UTC m=+30.152027765 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559823 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559815516 +0000 UTC m=+30.152047506 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559840 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559830667 +0000 UTC m=+30.152062647 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559878 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559905 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559927 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: 
\"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559934 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.559926829 +0000 UTC m=+30.152158819 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559954 3556 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560056 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560081 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.560074702 +0000 UTC m=+30.152306692 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.559973 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560130 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.560122133 +0000 UTC m=+30.152354123 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.559998 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560148 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config 
podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.560141064 +0000 UTC m=+30.152373054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560180 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560192 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560275 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560320 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.560310628 +0000 UTC m=+30.152542618 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560325 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560397 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560398 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.560375809 +0000 UTC m=+30.152608009 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560471 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560502 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560535 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560538 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560574 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560595 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560625 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.560612135 +0000 UTC m=+30.152844335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560662 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560691 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560712 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" 
(UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560727 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560748 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560752 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560923 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560757 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.560782 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: 
\"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560832 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560863 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561246 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561263 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.560873 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561315 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561328 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561475 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.560873731 +0000 UTC m=+30.153105891 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561514 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.561500086 +0000 UTC m=+30.153732076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561537 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.561529446 +0000 UTC m=+30.153761436 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.561564 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.561595 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561632 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.561599378 +0000 UTC m=+30.153831398 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561639 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561743 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561674 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.561659369 +0000 UTC m=+30.153891399 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561793 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:48.561778622 +0000 UTC m=+30.154010652 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561800 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561816 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.561844 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561889 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.561912 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod 
\"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561946 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561946 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.561895065 +0000 UTC m=+30.154127095 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.561985 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.561969796 +0000 UTC m=+30.154201826 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562102 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.562087089 +0000 UTC m=+30.154319119 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562157 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.56211573 +0000 UTC m=+30.154347760 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.562223 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562287 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.562276373 +0000 UTC m=+30.154508363 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562379 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562542 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562558 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.562652 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562696 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562733 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.562723225 +0000 UTC m=+30.154955215 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562766 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.562748845 +0000 UTC m=+30.154980875 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.562860 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.562964 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563053 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.563038902 +0000 UTC m=+30.155270922 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.563152 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563211 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.563269 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563330 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.563320208 +0000 UTC m=+30.155552198 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563393 3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563533 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563550 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.563644 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563690 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.563669746 +0000 UTC m=+30.155901736 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563736 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563753 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563763 3556 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.563777 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563791 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.563783729 +0000 UTC m=+30.156015719 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563898 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563928 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.563944 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564003 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.563987583 +0000 UTC m=+30.156219603 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564109 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564201 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564238 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.56422885 +0000 UTC m=+30.156460840 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564204 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564281 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564369 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564458 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564487 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:44 crc 
kubenswrapper[3556]: E1128 00:12:44.564513 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.564506346 +0000 UTC m=+30.156738336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564515 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564551 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564567 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564574 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.564568457 +0000 UTC m=+30.156800437 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564580 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564655 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564657 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.564640549 +0000 UTC m=+30.156872569 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564730 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564741 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564747 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.564731631 +0000 UTC m=+30.156963651 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564823 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564889 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564936 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.564925466 +0000 UTC m=+30.157157456 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564899 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.564967 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.564999 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565043 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565003717 +0000 UTC m=+30.157235737 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565084 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565093 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565116 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.56510762 +0000 UTC m=+30.157339610 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565155 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565190 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565234 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565237 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565223672 +0000 UTC m=+30.157455702 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565284 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565275393 +0000 UTC m=+30.157507383 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565302 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565294504 +0000 UTC m=+30.157526494 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565315 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565310024 +0000 UTC m=+30.157542014 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565194 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565317 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565361 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565365 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565353135 +0000 UTC m=+30.157585165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565402 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565445 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565476 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565575 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565599 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565620 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565655 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565640193 +0000 UTC m=+30.157872223 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565694 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565710 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565736 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565724415 +0000 UTC m=+30.157956585 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565768 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565798 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565831 3556 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565843 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565829697 +0000 UTC m=+30.158061727 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565846 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565888 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565897 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565884818 +0000 UTC m=+30.158116838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565924 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.565915369 +0000 UTC m=+30.158147589 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.565967 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.565973 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566054 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566109 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566153 3556 secret.go:194] Couldn't get secret 
openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566194 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.566184535 +0000 UTC m=+30.158416735 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566249 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566283 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.566270667 +0000 UTC m=+30.158502867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566162 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566351 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566381 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566399 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566406 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object 
"openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566437 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566467 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.566451601 +0000 UTC m=+30.158683621 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566509 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566517 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566529 3556 
projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566544 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566586 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566675 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.566568814 +0000 UTC m=+30.158801014 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566684 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566706 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566716 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566763 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566792 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc 
kubenswrapper[3556]: E1128 00:12:44.566814 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566830 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566857 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566874 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566878 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.566863982 +0000 UTC m=+30.159096002 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566252 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566911 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566924 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566937 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566965 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.566953464 +0000 UTC m=+30.159185494 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566994 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.566980204 +0000 UTC m=+30.159212224 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567036 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567055 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567065 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567084 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567007265 +0000 UTC m=+30.159239295 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567118 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567103767 +0000 UTC m=+30.159335787 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567127 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567160 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567149648 +0000 UTC m=+30.159381828 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.566799 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567191 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567209 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567235 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.56722261 +0000 UTC m=+30.159454630 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567274 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567281 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567314 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567302812 +0000 UTC m=+30.159535002 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567350 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567365 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567395 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567407 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567395144 +0000 UTC m=+30.159627164 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567451 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567493 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567480286 +0000 UTC m=+30.159712466 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567458 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567512 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567539 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567557 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567545267 +0000 UTC m=+30.159777297 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567595 3556 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567634 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567622099 +0000 UTC m=+30.159854299 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567600 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567654 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567684 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567698 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.56768632 +0000 UTC m=+30.159918350 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567743 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567748 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567803 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567862 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567881 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567892 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567926 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.567916505 +0000 UTC m=+30.160148705 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567940 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567962 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567977 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568054 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568040908 +0000 UTC m=+30.160272938 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568085 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568069939 +0000 UTC m=+30.160301959 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.567867 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568104 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566884 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568147 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.56813735 +0000 UTC m=+30.160369340 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.568189 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568193 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568181121 +0000 UTC m=+30.160413151 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.568245 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.568284 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.568320 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.568355 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.568393 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.566722 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568488 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568493 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568477759 +0000 UTC m=+30.160709789 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568535 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.56852556 +0000 UTC m=+30.160757550 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568055 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568574 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568565131 +0000 UTC m=+30.160797341 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568618 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568652 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568643203 +0000 UTC m=+30.160875393 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.567976 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568684 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568675404 +0000 UTC m=+30.160907614 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568747 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568763 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568775 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568806 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568797156 +0000 UTC m=+30.161029346 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568856 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568887 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.568878668 +0000 UTC m=+30.161110658 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568937 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.568963 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:48.5689552 +0000 UTC m=+30.161187190 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.569024 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.569058 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.569047942 +0000 UTC m=+30.161279932 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.569110 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.569142 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.569132814 +0000 UTC m=+30.161365024 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.568443 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569191 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569244 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569278 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:44 crc 
kubenswrapper[3556]: I1128 00:12:44.569313 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569347 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569380 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569417 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569452 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" 
(UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569521 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569563 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569601 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569639 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569677 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569713 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569748 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569780 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569813 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:44 crc kubenswrapper[3556]: 
I1128 00:12:44.569848 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569887 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.569930 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569946 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.569985 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.569999 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.569976954 +0000 UTC m=+30.162208984 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570062 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570084 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570105 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570094697 +0000 UTC m=+30.162326687 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570140 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570155 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570191 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570181539 +0000 UTC m=+30.162413769 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570193 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570241 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570254 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570281 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570324 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570290311 +0000 UTC m=+30.162522341 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570349 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570383 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570373813 +0000 UTC m=+30.162606013 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570383 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570430 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570456 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570474 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570518 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:48.570500916 +0000 UTC m=+30.162732946 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570560 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570575 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570621 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570635 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570663 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:48.5706498 +0000 UTC m=+30.162881980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570700 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570709 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570743 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570754 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570741992 +0000 UTC m=+30.162974022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570793 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570807 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.570847 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570893 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570847 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570837354 +0000 UTC m=+30.163069564 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570957 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570943776 +0000 UTC m=+30.163175796 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570965 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571021 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.570993577 +0000 UTC m=+30.163225797 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571062 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571072 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571111 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571101101 +0000 UTC m=+30.163333321 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571131 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571120821 +0000 UTC m=+30.163353041 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571165 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571185 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571223 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571213373 +0000 UTC m=+30.163445583 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571187 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571236 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571246 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571281 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571266845 +0000 UTC m=+30.163498865 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571309 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571295435 +0000 UTC m=+30.163527465 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571320 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571356 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571346186 +0000 UTC m=+30.163578396 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571375 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571403 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571417 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571405468 +0000 UTC m=+30.163637498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571442 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571433038 +0000 UTC m=+30.163665238 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571472 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571496 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571517 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.57150588 +0000 UTC m=+30.163737900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571548 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571531151 +0000 UTC m=+30.163763171 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571561 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571596 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571585932 +0000 UTC m=+30.163818142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571612 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571645 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571654 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:48.571641523 +0000 UTC m=+30.163873543 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571681 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571671674 +0000 UTC m=+30.163903864 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571708 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571723 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571748 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571737435 +0000 UTC m=+30.163969465 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571775 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571762686 +0000 UTC m=+30.163994716 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571792 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571831 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571820117 +0000 UTC m=+30.164052327 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571842 3556 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571891 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.571878048 +0000 UTC m=+30.164110068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.570579 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.571917 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 
00:12:44.571963 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.57195224 +0000 UTC m=+30.164184270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572049 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572090 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.572078333 +0000 UTC m=+30.164310363 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572167 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572182 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.572169665 +0000 UTC m=+30.164401875 (durationBeforeRetry 4s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572190 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572214 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572249 3556 secret.go:194] Couldn't get 
secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572256 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.572245817 +0000 UTC m=+30.164478037 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572310 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.572294108 +0000 UTC m=+30.164526138 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572359 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572371 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572396 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.57238618 +0000 UTC m=+30.164618360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.572424 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.57240795 +0000 UTC m=+30.164639980 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.671599 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.671828 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.671882 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.671903 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.671956 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 
00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672080 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.671962253 +0000 UTC m=+30.264194283 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672197 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672239 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672260 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672358 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:48.672326251 +0000 UTC m=+30.264558281 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.672629 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672812 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672848 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672863 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.672910 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: 
\"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672912 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.672897334 +0000 UTC m=+30.265129354 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.672995 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673048 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673062 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673115 3556 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.673102149 +0000 UTC m=+30.265334169 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.673130 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.673213 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673354 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673369 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not 
registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673394 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673395 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673425 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673438 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.673424087 +0000 UTC m=+30.265656117 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.673499 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.673476238 +0000 UTC m=+30.265708378 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.673722 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.673798 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.674837 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.675065 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.675171 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675244 3556 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675296 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675317 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675367 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675378 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.675359402 +0000 UTC m=+30.267591422 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675391 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675408 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675463 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.675444304 +0000 UTC m=+30.267676324 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675547 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675548 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675571 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675581 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675588 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675597 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object 
"openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675299 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675654 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.675623848 +0000 UTC m=+30.267855868 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675660 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675678 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675686 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 
podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.675669779 +0000 UTC m=+30.267901809 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.675721 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.67570636 +0000 UTC m=+30.267938390 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.783163 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.783495 3556 projected.go:294] Couldn't get configMap 
openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.783574 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.783604 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.783899 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.783835802 +0000 UTC m=+30.376067792 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.784086 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.784313 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.784372 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.784398 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.784526 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:48.784491298 +0000 UTC m=+30.376723358 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.886985 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.887146 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887196 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887236 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887253 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object 
"openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887311 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.887237 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887333 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887344 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887319 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.887298006 +0000 UTC m=+30.479529996 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887469 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887495 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.887487831 +0000 UTC m=+30.479720071 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887504 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887521 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887529 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887535 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887547 3556 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: 
I1128 00:12:44.887478 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887573 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.887565323 +0000 UTC m=+30.479797313 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.887619 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:48.887592354 +0000 UTC m=+30.479824384 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912498 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912537 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912568 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912537 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912662 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912682 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912730 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912705 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912797 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912852 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912870 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912895 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.912916 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.912869 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913036 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913062 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.913065 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913110 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.913127 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913163 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913237 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.913386 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913406 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913419 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913432 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913496 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913500 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.913580 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.913733 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913805 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.913892 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.913986 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.913896 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.914075 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.914124 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.914162 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.914207 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.914327 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.914516 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.914612 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.914712 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.914782 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.914871 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.914921 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.914994 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.915030 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.915139 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.915257 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.915327 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.915446 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:12:44 crc kubenswrapper[3556]: I1128 00:12:44.915487 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.915631 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.915733 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.915826 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.916005 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.916377 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.916499 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.916797 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.916890 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.916897 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.917539 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.917626 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.917722 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.917912 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.918304 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:12:44 crc kubenswrapper[3556]: E1128 00:12:44.918475 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.191671 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"46a8560a393a5439aed7b64a6b5a18f76e9777704ab9f4b63d60bc801f21cb8a"} Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.195803 3556 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="3e9d7d5f52c9094244667feeecae161e4c0755e7c1283154e94f745d85efd041" exitCode=0 Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.195872 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"3e9d7d5f52c9094244667feeecae161e4c0755e7c1283154e94f745d85efd041"} Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 
00:12:45.264490 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:45 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:45 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:45 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.264757 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912504 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912599 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912758 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912765 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912841 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912866 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912907 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.912964 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.912772 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.913199 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.913221 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.913374 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.913445 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.913552 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.913664 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.913714 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.913835 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.914176 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.914235 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.914296 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:12:45 crc kubenswrapper[3556]: I1128 00:12:45.914328 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.914482 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.914772 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.914993 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.915171 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.915324 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:12:45 crc kubenswrapper[3556]: E1128 00:12:45.915392 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.206287 3556 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="e8dc3d3c3320d3da5119061e7b4992d840551d90bf4e87b842500117eef3f0dd" exitCode=0 Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.206371 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"e8dc3d3c3320d3da5119061e7b4992d840551d90bf4e87b842500117eef3f0dd"} Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.226857 3556 kubelet_node_status.go:402] "Setting node annotation to enable volume controller attach/detach" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.231141 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.231491 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.231722 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.232424 3556 kubelet_node_status.go:77] "Attempting to register node" node="crc" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.246169 3556 kubelet_node_status.go:116] "Node was previously registered" node="crc" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.246608 3556 kubelet_node_status.go:80] "Successfully registered node" node="crc" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.250395 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.250606 3556 setters.go:574] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T00:12:46Z","lastTransitionTime":"2025-11-28T00:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.266222 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:46 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:46 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:46 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.266344 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912704 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912757 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912775 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912717 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912922 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912968 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913004 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913040 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912893 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913131 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913139 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.913170 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913206 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912709 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913275 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.912976 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913329 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913346 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913358 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913289 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.913499 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913515 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913590 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913148 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.913747 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.913791 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.913866 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.914070 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.914086 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.914158 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.914319 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.914355 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.914363 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.914500 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.914582 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.914582 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.914675 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.914925 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.915073 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.915192 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.915299 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:46 crc kubenswrapper[3556]: I1128 00:12:46.915384 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.915591 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.915949 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.916061 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.916328 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.916451 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.916525 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.916553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.916729 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.916860 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.917063 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.917282 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.917513 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.917677 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.917902 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.917983 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.918125 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.918368 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.918454 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.918483 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.918693 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.918695 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.918834 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:12:46 crc kubenswrapper[3556]: E1128 00:12:46.919038 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.217406 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"0e8abae46f875f61a9baba43204ffb75d748b30121e4cc89d5d3403178aaa207"}
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.222263 3556 generic.go:334] "Generic (PLEG): container finished" podID="7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8" containerID="02e08d041115fd3622e863c5e637474ae651df2c0e2b13fc5a78e7b712baa2d9" exitCode=0
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.222303 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerDied","Data":"02e08d041115fd3622e863c5e637474ae651df2c0e2b13fc5a78e7b712baa2d9"}
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.265617 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:47 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:47 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:47 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.265689 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912576 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912862 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912929 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912608 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912741 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912807 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912830 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.913076 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912846 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912889 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.912634 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.913259 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.913180 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.913385 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.913441 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.913524 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.913603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.913716 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:12:47 crc kubenswrapper[3556]: I1128 00:12:47.913771 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.913861 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.913973 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.914144 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.914249 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.914373 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.914481 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.914537 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.914648 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:12:47 crc kubenswrapper[3556]: E1128 00:12:47.914729 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.233425 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bzj2p" event={"ID":"7dbadf0a-ba02-47d6-96a9-0995c1e8e4a8","Type":"ContainerStarted","Data":"12f6338856ecbfdd92f3f1d5544199aeca95c8b33a2f4cb402f7c6710b291016"}
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.264629 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:48 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:48 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:48 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.264713 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.621668 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.621841 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.621867 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.621928 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.621977 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.621939999 +0000 UTC m=+38.214172029 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622099 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622115 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622164 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622216 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622234 3556 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.622205015 +0000 UTC m=+38.214437045 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622293 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622313 3556 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622333 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622315 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622376 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert 
podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.622352018 +0000 UTC m=+38.214584098 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622407 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.622392969 +0000 UTC m=+38.214625129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622452 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622478 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622496 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not 
registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622505 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622540 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622564 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622545 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.622529742 +0000 UTC m=+38.214761762 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622643 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622665 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622679 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622686 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622733 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp 
podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.622708106 +0000 UTC m=+38.214940126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622769 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.622752147 +0000 UTC m=+38.214984167 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622821 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622874 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" 
(UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622923 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622922 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622958 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.622975 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.622988 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623007 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod 
openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623063 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623117 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623072296 +0000 UTC m=+38.215304366 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623156 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623164 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623182 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623197 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623164 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623142287 +0000 UTC m=+38.215374317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623287 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623310 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623326 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623343 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623288 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623383 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623310 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623295041 +0000 UTC m=+38.215527071 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623384 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623406 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623423 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623407983 +0000 UTC m=+38.215640003 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623432 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623434 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623447 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623435814 +0000 UTC m=+38.215667844 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623494 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623519 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623496195 +0000 UTC m=+38.215728275 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623542 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623589 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623576247 +0000 UTC m=+38.215808267 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623606 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623646 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623621308 +0000 UTC m=+38.215853438 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623667 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623712 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623720 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.62370531 +0000 UTC m=+38.215937330 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623799 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623814 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623858 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623845733 +0000 UTC m=+38.216077763 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623901 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623939 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.623953 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.623988 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624007 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624050 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.623988956 +0000 UTC m=+38.216221076 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624092 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.624066008 +0000 UTC m=+38.216298028 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624123 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624141 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624154 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624185 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.62416561 +0000 UTC m=+38.216397780 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624218 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.624200871 +0000 UTC m=+38.216432901 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624219 3556 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624270 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.624257053 +0000 UTC m=+38.216489073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624275 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624336 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624385 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.624367915 +0000 UTC m=+38.216599945 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624387 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624338 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624434 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.624422406 +0000 UTC m=+38.216654436 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624525 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624579 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624630 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624653 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624678 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624726 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624789 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624799 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624789 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624856 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.624810 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624873 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624893 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624857 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.624839127 +0000 UTC m=+38.217071147 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624941 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624981 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624946 
3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.624934049 +0000 UTC m=+38.217166079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.624872 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625085 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625062722 +0000 UTC m=+38.217294752 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625129 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:56.625115653 +0000 UTC m=+38.217347673 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625144 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625162 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625150054 +0000 UTC m=+38.217382074 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625200 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625201 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625187895 +0000 UTC m=+38.217419925 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625208 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625254 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625238206 +0000 UTC m=+38.217470316 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625274 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625312 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625323 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625309447 +0000 UTC m=+38.217541467 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625363 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625410 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625418 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625476 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625485 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625465701 +0000 UTC m=+38.217697731 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625523 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625525 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625533 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625551 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625567 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: 
I1128 00:12:48.625572 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625570 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625586 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625572403 +0000 UTC m=+38.217804433 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625654 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625656 3556 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625694 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625710 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625694176 +0000 UTC m=+38.217926206 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625745 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625727877 +0000 UTC m=+38.217959897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625746 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625789 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625811 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:56.625795658 +0000 UTC m=+38.218027688 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625868 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625920 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625905572 +0000 UTC m=+38.218137592 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.625923 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.625973 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625955923 +0000 UTC m=+38.218187943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626002 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.625989444 +0000 UTC m=+38.218221474 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626110 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626006 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626169 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626219 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626228 
3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626206619 +0000 UTC m=+38.218438639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626278 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626294 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626309 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626335 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626348 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626331802 +0000 UTC m=+38.218563822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626376 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626381 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626362812 +0000 UTC m=+38.218594832 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626408 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626435 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626449 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626454 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626439924 +0000 UTC m=+38.218672074 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626486 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626501 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626487465 +0000 UTC m=+38.218719485 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626527 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626514896 +0000 UTC m=+38.218746926 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626562 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626610 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626612 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626654 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626664 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626651069 +0000 UTC m=+38.218883099 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626693 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.6266805 +0000 UTC m=+38.218912530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626723 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626725 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626735 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626769 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626755101 +0000 UTC m=+38.218987121 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626794 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626815 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626801272 +0000 UTC m=+38.219033302 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626880 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.626866794 +0000 UTC m=+38.219098824 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.626881 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627046 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.626928 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627096 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627115 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627101209 +0000 UTC m=+38.219333229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627154 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627186 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627203 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627251 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627272 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627298 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627331 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627319765 +0000 UTC m=+38.219551795 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627357 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627344306 +0000 UTC m=+38.219576326 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627394 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627465 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627485 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627538 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627486 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627470669 +0000 UTC m=+38.219702699 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627596 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627619 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627632 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627658 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627645743 +0000 UTC m=+38.219877763 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627686 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627672433 +0000 UTC m=+38.219904453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627703 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627760 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627704494 +0000 UTC m=+38.219936524 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627820 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.627898 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.627955 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627921679 +0000 UTC m=+38.220153719 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628055 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628061 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.627999751 +0000 UTC m=+38.220231891 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628082 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628098 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.628138 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628160 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.628140054 +0000 UTC m=+38.220372134 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.628220 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628254 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.628233646 +0000 UTC m=+38.220465676 (durationBeforeRetry 8s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.628311 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628333 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.628380 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628409 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.628387259 +0000 UTC m=+38.220619289 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.628464 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628474 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628530 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.628517362 +0000 UTC m=+38.220749382 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.628542 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628550 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628588 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628626 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628703 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628725 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628710 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.628681047 +0000 UTC m=+38.220913087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628727 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628863 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.62882927 +0000 UTC m=+38.221061300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.628636 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628908 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.628893582 +0000 UTC m=+38.221125682 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628580 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.628954 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.628927893 +0000 UTC m=+38.221160013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.629107 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629157 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629227 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.629173038 +0000 UTC m=+38.221405068 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.629313 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.629391 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629430 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.629455 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629496 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.629469865 +0000 UTC m=+38.221701905 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629540 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.629557 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629594 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.629579897 +0000 UTC m=+38.221811917 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629596 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.629641 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629665 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.629649079 +0000 UTC m=+38.221881109 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629709 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629761 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629804 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.629721 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629740 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.629727731 +0000 UTC m=+38.221959761 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629923 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.629894615 +0000 UTC m=+38.222126665 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.629959 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.629939856 +0000 UTC m=+38.222172006 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630054 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:56.629990377 +0000 UTC m=+38.222222537 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630147 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630230 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630281 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630340 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.630322605 +0000 UTC m=+38.222554625 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630341 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630412 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630417 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630449 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630482 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.630466049 +0000 UTC m=+38.222698079 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630467 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630523 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630541 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.63050983 +0000 UTC m=+38.222741930 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630572 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:56.630558081 +0000 UTC m=+38.222790111 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630595 3556 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630618 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630715 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630751 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.630734525 +0000 UTC m=+38.222966555 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630768 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630808 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630827 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630858 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.630838827 +0000 UTC m=+38.223070857 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630912 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.630965 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.630974 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.6309601 +0000 UTC m=+38.223192120 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631044 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631058 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631074 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631114 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631128 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:56.631114053 +0000 UTC m=+38.223346073 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631161 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631178 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631217 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.631199075 +0000 UTC m=+38.223431105 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631218 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631249 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631265 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631273 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631293 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631294 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config 
podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.631282427 +0000 UTC m=+38.223514447 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631328 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.631314788 +0000 UTC m=+38.223546808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631275 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631355 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:56.631341908 +0000 UTC m=+38.223573938 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631330 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631398 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631403 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.63138957 +0000 UTC m=+38.223621590 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631462 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631520 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631558 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631581 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631596 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631609 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.631584245 +0000 UTC m=+38.223816285 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631643 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.631630636 +0000 UTC m=+38.223862906 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631699 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631777 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631834 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631835 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631893 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631898 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.631883712 +0000 UTC m=+38.224115742 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.631981 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.631964274 +0000 UTC m=+38.224196294 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.631903 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632048 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632077 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632087 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632122 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632122 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632157 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632169 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632092 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632189 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. 
No retries permitted until 2025-11-28 00:12:56.632166028 +0000 UTC m=+38.224398088 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632192 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632219 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632241 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632250 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.63222563 +0000 UTC m=+38.224457700 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632301 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.632281911 +0000 UTC m=+38.224514041 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632312 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632378 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632409 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632429 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632440 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.632423674 +0000 UTC m=+38.224655704 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632487 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632501 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.632481655 +0000 UTC m=+38.224713795 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632495 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632539 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.632525726 +0000 UTC m=+38.224757746 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632590 3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632611 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632621 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632648 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632701 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.63268604 +0000 UTC m=+38.224918070 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632700 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632802 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632865 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.632850455 +0000 UTC m=+38.225082475 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632868 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632916 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632933 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.632941 3556 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.632989 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633076 3556 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.632997948 +0000 UTC m=+38.225230038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633084 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633138 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633125721 +0000 UTC m=+38.225357751 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633146 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633174 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633190 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633154 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633245 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633239 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 
podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633225273 +0000 UTC m=+38.225457293 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633298 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633321 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633299495 +0000 UTC m=+38.225531565 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633373 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633386 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633415 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633403167 +0000 UTC m=+38.225635187 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633476 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633563 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633560 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633640 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633622232 +0000 UTC m=+38.225854262 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633641 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633645 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633697 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633728 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633695 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633682854 +0000 UTC m=+38.225914884 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633848 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633869 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633846427 +0000 UTC m=+38.226078447 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633908 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.633895648 +0000 UTC m=+38.226127668 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633924 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.633954 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.633969 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.63395692 +0000 UTC m=+38.226188940 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634081 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634136 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634139 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634194 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634222 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 
nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634198795 +0000 UTC m=+38.226430835 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634238 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634265 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634287 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634274228 +0000 UTC m=+38.226506248 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634331 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634332 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634383 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634400 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634450 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:48 crc 
kubenswrapper[3556]: E1128 00:12:48.634405 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634385251 +0000 UTC m=+38.226617381 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634535 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634579 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634629 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634614876 +0000 UTC m=+38.226846906 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634638 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634653 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634641556 +0000 UTC m=+38.226873586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634686 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634671307 +0000 UTC m=+38.226903327 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634711 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634699288 +0000 UTC m=+38.226931308 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634754 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634777 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.634805 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod 
\"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634848 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634829661 +0000 UTC m=+38.227061681 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634911 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634922 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634968 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634956284 +0000 UTC m=+38.227188304 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.634996 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.634982834 +0000 UTC m=+38.227214864 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.736311 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.736633 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.736830 3556 projected.go:294] Couldn't get configMap 
openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.736854 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.736890 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.736894 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.736918 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.736919 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.737006 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.736978523 +0000 UTC m=+38.329210583 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.737058 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.737044855 +0000 UTC m=+38.329276965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.737094 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.737139 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.737158 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: 
E1128 00:12:48.737247 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.737222239 +0000 UTC m=+38.329454259 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.737384 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.737489 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.737814 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " 
pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.737912 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738062 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738079 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738093 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738107 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738122 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738135 3556 projected.go:294] Couldn't get configMap 
openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738151 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.73813295 +0000 UTC m=+38.330364980 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738165 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738186 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738187 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.738168951 +0000 UTC m=+38.330400971 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.738361 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.738333155 +0000 UTC m=+38.330565175 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.739388 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.739592 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.739690 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739687 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739742 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739766 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739791 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739821 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739838 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: 
[object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.739842 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739851 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.73982421 +0000 UTC m=+38.332056270 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739895 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.739877351 +0000 UTC m=+38.332109371 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739995 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.739962 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740144 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.740222 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740253 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740060 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740400 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740412 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740444 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740460 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740479 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.740435384 +0000 UTC m=+38.332667474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740509 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.740496935 +0000 UTC m=+38.332729095 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.740667 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.74064912 +0000 UTC m=+38.332881150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.843152 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.843427 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.843476 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.843502 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.843608 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.84358042 +0000 UTC m=+38.435812460 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.844054 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.844204 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.844255 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.844278 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.844479 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.84445024 +0000 UTC m=+38.436682260 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912624 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912725 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912779 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912818 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912726 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912674 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912878 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.912903 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.915108 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915156 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915195 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915210 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.915265 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915282 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915296 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915350 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915468 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.915476 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915516 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915564 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.915697 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915740 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915778 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.915841 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.915858 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916047 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.916060 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916218 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.916281 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.916342 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916383 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916457 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.916533 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916616 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.916688 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.916714 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916771 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916843 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.916942 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.916989 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.917124 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.917321 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.917365 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.917501 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.917713 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.917739 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.917830 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.917910 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.917992 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.918124 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.918203 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.918308 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.918456 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.918594 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.918657 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.918739 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.918822 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919004 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919128 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919228 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919325 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919381 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919444 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919493 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.919547 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.947071 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.947150 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.947208 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947250 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: I1128 00:12:48.947283 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947289 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947302 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947364 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.9473326 +0000 UTC m=+38.539564590 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947459 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947481 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947495 3556 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947491 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947551 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947582 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947507 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947619 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947627 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947552 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.947535034 +0000 UTC m=+38.539767084 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947775 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.947765149 +0000 UTC m=+38.539997139 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:48 crc kubenswrapper[3556]: E1128 00:12:48.947792 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:12:56.94778626 +0000 UTC m=+38.540018250 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.264869 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:49 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:49 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:49 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.265045 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.912939 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913047 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913071 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913140 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913149 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913214 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913257 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913280 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913285 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913336 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913367 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913447 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.913503 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913581 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.913794 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:12:49 crc kubenswrapper[3556]: I1128 00:12:49.913804 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.914075 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.914553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.914629 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.914736 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.914833 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.915006 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.915117 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.915256 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.915394 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.915504 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.915644 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:12:49 crc kubenswrapper[3556]: E1128 00:12:49.915773 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.246356 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerStarted","Data":"a592f23f00130a8b85c7f8ff874d278a6eafb49f164470cc714b0b3cb3f14565"} Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.264996 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:50 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:50 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:50 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.265159 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912622 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912680 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912623 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912745 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912810 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912811 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912909 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.912925 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913000 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913052 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913076 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.912646 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913181 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.913181 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.913275 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913296 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913341 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.913409 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913423 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913437 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913478 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913543 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913554 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913608 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913629 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.913612 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.913826 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913883 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.913893 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.913959 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.914061 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.914113 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.914201 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.914249 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.914357 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.914506 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.914568 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.914725 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.914777 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.914917 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.914969 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.915170 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.915301 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.915431 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.915481 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.915764 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.915821 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.915927 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.915972 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.916134 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.916302 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.916361 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:50 crc kubenswrapper[3556]: I1128 00:12:50.916627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.916643 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.916775 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.916824 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.920353 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.916899 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.917149 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.917257 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.917430 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.919656 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.920053 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.920130 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.921415 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:12:50 crc kubenswrapper[3556]: E1128 00:12:50.921583 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.242884 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.242941 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.249635 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.261982 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.264768 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:51 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:51 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:51 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.264876 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.322413 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.322518 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912634 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912755 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912774 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912837 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912836 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912865 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912879 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912876 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912934 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912939 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912905 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.912993 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913205 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.913041 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913365 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913063 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:12:51 crc kubenswrapper[3556]: I1128 00:12:51.913064 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913467 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913547 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913615 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913745 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913869 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.913991 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.914113 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.914201 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.914261 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.914411 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:12:51 crc kubenswrapper[3556]: E1128 00:12:51.914493 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.253173 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.264955 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:52 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:52 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:52 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.265113 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.912833 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.912950 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.912979 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.912994 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913080 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913096 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.915316 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913127 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913116 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913180 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.915520 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913198 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.915651 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913198 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913224 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913238 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913230 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.915799 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913248 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913248 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913303 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913309 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913315 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913328 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.916297 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913341 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913352 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913361 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.916439 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913364 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913372 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913382 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.916572 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913403 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913406 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913405 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913408 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913426 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.916801 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913430 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:12:52 crc kubenswrapper[3556]: I1128 00:12:52.913520 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.916139 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.916909 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.917149 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.917284 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.917599 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.917726 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.917895 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.918133 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.918210 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.918391 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.918672 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.918811 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.918949 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.919118 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.919325 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.919408 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.919725 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.919947 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.920218 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.920369 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.920530 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.920693 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.920908 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.921123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:12:52 crc kubenswrapper[3556]: E1128 00:12:52.921300 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.256610 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.264810 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:53 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:53 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:53 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.264937 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.912815 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.913740 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.912963 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.912983 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.914191 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.912999 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913041 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913059 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.914430 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913077 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.914526 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913102 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913110 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913136 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913159 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913163 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913172 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:53 crc kubenswrapper[3556]: I1128 00:12:53.913194 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.914663 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.914801 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.914933 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.915189 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.915311 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.915400 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.915521 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.915694 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.915810 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:12:53 crc kubenswrapper[3556]: E1128 00:12:53.915920 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.265653 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:54 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:54 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:54 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.265765 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.912603 3556 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.912699 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.912765 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.912790 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.912809 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.912890 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.912918 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.912954 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913049 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913103 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913122 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913157 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913171 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913241 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.913247 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913276 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913312 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913374 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913373 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913382 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913429 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913466 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913465 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913499 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913537 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913495 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913578 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913581 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913612 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913786 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913613 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.913962 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.914079 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.914129 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.914123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.914414 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:12:54 crc kubenswrapper[3556]: I1128 00:12:54.914421 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.914645 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.914884 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.915118 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.915287 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.915844 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.915875 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.915990 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.916316 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.916620 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.916840 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.917263 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.917379 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.917714 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.917891 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.918061 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.918253 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.918425 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.918613 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.918758 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.919123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.919204 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.919433 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.919745 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.919934 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.920100 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.920308 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.920582 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.920678 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:12:54 crc kubenswrapper[3556]: E1128 00:12:54.920974 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.264637 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:55 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:55 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:55 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.264740 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912306 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912370 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912462 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912399 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912485 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912546 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912486 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912614 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912673 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.912798 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.912821 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.912977 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.913075 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.913220 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:12:55 crc kubenswrapper[3556]: I1128 00:12:55.913445 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.913639 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.913740 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.914001 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.914230 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.914385 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.914564 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.914786 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.914927 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.915629 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.915950 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:12:55 crc kubenswrapper[3556]: E1128 00:12:55.916296 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.266500 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:56 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:56 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:56 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.266571 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640678 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640738 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640776 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640806 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640838 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:56 crc 
kubenswrapper[3556]: I1128 00:12:56.640868 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640900 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640932 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.640965 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641002 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641065 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641098 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641128 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641160 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641191 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: 
\"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641221 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641252 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641280 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641357 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641401 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641430 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641460 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641491 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:56 crc 
kubenswrapper[3556]: I1128 00:12:56.641521 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641551 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641603 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641647 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641678 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: 
\"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641709 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641739 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641795 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641823 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641852 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641882 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641913 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641945 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.641976 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:56 crc kubenswrapper[3556]: 
I1128 00:12:56.642024 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642058 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642088 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642115 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642145 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod 
\"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642175 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642208 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642240 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642271 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642313 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642355 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642386 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642419 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642449 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:12:56 crc 
kubenswrapper[3556]: I1128 00:12:56.642479 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642502 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642531 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642561 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642591 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 
00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642621 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642650 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642691 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642716 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642753 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642782 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642811 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642833 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642853 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:56 crc 
kubenswrapper[3556]: I1128 00:12:56.642887 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642909 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642939 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642961 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.642983 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod 
\"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643030 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643056 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643079 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643104 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643128 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643159 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643190 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643213 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643234 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: 
\"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643255 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643276 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643297 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643319 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643342 3556 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643361 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643381 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643402 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643421 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643442 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643475 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643499 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643520 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643542 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643572 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643592 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643613 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643633 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643653 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: 
\"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643672 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643697 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643721 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643748 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:56 crc 
kubenswrapper[3556]: I1128 00:12:56.643768 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643787 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643817 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643837 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643858 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643887 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643919 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643947 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643968 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.643987 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644024 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644045 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644067 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644089 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644110 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644132 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644154 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.644174 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644283 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644329 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644317547 +0000 UTC m=+54.236549527 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644370 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644390 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644384379 +0000 UTC m=+54.236616369 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644415 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644433 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64442828 +0000 UTC m=+54.236660270 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644465 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644485 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644479652 +0000 UTC m=+54.236711642 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644516 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644534 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644529183 +0000 UTC m=+54.236761173 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644561 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644582 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644577264 +0000 UTC m=+54.236809244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644611 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644627 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644622195 +0000 UTC m=+54.236854185 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644654 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644671 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644666296 +0000 UTC m=+54.236898286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644699 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644716 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644711437 +0000 UTC m=+54.236943427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644747 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644766 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644760398 +0000 UTC m=+54.236992388 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.644981 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.644974923 +0000 UTC m=+54.237206913 (durationBeforeRetry 16s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645053 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645063 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645073 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645096 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645089776 +0000 UTC m=+54.237321766 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645129 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645149 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645143857 +0000 UTC m=+54.237375847 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645180 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645199 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645193418 +0000 UTC m=+54.237425408 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645233 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645260 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645253009 +0000 UTC m=+54.237484999 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645298 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645309 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645317 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645337 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645332191 +0000 UTC m=+54.237564181 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645370 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645388 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645383022 +0000 UTC m=+54.237615012 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645419 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645436 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645431684 +0000 UTC m=+54.237663674 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645462 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645480 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645474555 +0000 UTC m=+54.237706545 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645503 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645519 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645514505 +0000 UTC m=+54.237746495 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645551 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645585 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645563877 +0000 UTC m=+54.237795867 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645618 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645635 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645630508 +0000 UTC m=+54.237862498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645661 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645678 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645673959 +0000 UTC m=+54.237905949 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645707 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645724 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64571979 +0000 UTC m=+54.237951780 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645753 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645770 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645765041 +0000 UTC m=+54.237997031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645795 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645814 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645808182 +0000 UTC m=+54.238040172 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645843 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645861 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645856794 +0000 UTC m=+54.238088784 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645887 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645905 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645900485 +0000 UTC m=+54.238132475 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645932 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645953 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645946926 +0000 UTC m=+54.238178916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645978 3556 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.645996 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.645989787 +0000 UTC m=+54.238221777 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646041 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646059 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646053689 +0000 UTC m=+54.238285679 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646085 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646102 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.6460967 +0000 UTC m=+54.238328690 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646138 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646145 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646164 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646159351 +0000 UTC m=+54.238391341 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646189 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646206 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646201542 +0000 UTC m=+54.238433532 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646234 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646252 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646246973 +0000 UTC m=+54.238478963 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646279 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646298 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646292574 +0000 UTC m=+54.238524564 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646331 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646340 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646347 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646365 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646360186 +0000 UTC m=+54.238592166 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646403 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646410 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646429 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646422137 +0000 UTC m=+54.238654127 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646460 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646477 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646472158 +0000 UTC m=+54.238704148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646503 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646522 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646516939 +0000 UTC m=+54.238748929 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646556 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646566 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646572 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646593 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646586671 +0000 UTC m=+54.238818661 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646620 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646638 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646633022 +0000 UTC m=+54.238865012 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646663 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646680 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646675103 +0000 UTC m=+54.238907093 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646704 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646722 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646716524 +0000 UTC m=+54.238948514 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646754 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646763 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646770 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc 
kubenswrapper[3556]: E1128 00:12:56.646791 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646784955 +0000 UTC m=+54.239016945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646827 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646836 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646843 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646861 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646855997 +0000 UTC m=+54.239087987 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646898 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646907 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646913 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646931 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.646925788 +0000 UTC m=+54.239157778 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646964 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.646982 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64697611 +0000 UTC m=+54.239208100 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647024 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647046 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:12.647039871 +0000 UTC m=+54.239271861 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647074 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647091 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647086082 +0000 UTC m=+54.239318072 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647126 3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647136 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647143 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object 
"openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647162 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647156124 +0000 UTC m=+54.239388114 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647199 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647208 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647214 3556 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647234 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" 
failed. No retries permitted until 2025-11-28 00:13:12.647227395 +0000 UTC m=+54.239459385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647273 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647283 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647289 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647308 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647302058 +0000 UTC m=+54.239534048 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647335 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647353 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647347809 +0000 UTC m=+54.239579799 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647378 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647397 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64739182 +0000 UTC m=+54.239623810 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647432 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647449 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647444561 +0000 UTC m=+54.239676551 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647477 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647496 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647489632 +0000 UTC m=+54.239721622 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647525 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647542 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647537113 +0000 UTC m=+54.239769103 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647572 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647591 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647585704 +0000 UTC m=+54.239817684 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647617 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647634 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647629165 +0000 UTC m=+54.239861155 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647662 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647679 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647674626 +0000 UTC m=+54.239906606 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647710 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647728 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647723168 +0000 UTC m=+54.239955158 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647754 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647773 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647767099 +0000 UTC m=+54.239999099 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647803 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647821 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.6478163 +0000 UTC m=+54.240048290 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647847 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647870 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647862881 +0000 UTC m=+54.240094871 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647907 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647929 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647922612 +0000 UTC m=+54.240154602 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647967 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.647985 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.647979473 +0000 UTC m=+54.240211463 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648028 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648047 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648041225 +0000 UTC m=+54.240273215 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648077 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648095 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648090526 +0000 UTC m=+54.240322516 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648129 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648147 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648141287 +0000 UTC m=+54.240373277 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648178 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648201 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648196168 +0000 UTC m=+54.240428148 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648229 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648247 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648241749 +0000 UTC m=+54.240473739 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648273 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648289 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64828432 +0000 UTC m=+54.240516310 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648328 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648337 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648346 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648365 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648359592 +0000 UTC m=+54.240591582 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648395 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648412 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648406773 +0000 UTC m=+54.240638763 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648449 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648459 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648467 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648487 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648481205 +0000 UTC m=+54.240713195 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648520 3556 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648526 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648545 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648538526 +0000 UTC m=+54.240770516 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648581 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648590 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648597 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648616 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648610288 +0000 UTC m=+54.240842278 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648657 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648668 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648674 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648693 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64868808 +0000 UTC m=+54.240920070 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648727 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648745 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648740462 +0000 UTC m=+54.240972452 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648781 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648793 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648800 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648819 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648813233 +0000 UTC m=+54.241045223 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648855 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648866 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648873 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648891 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648886245 +0000 UTC m=+54.241118235 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648928 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648939 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648945 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648966 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.648959407 +0000 UTC m=+54.241191397 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.648996 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649031 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649025348 +0000 UTC m=+54.241257338 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649068 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649078 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649085 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649105 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64909919 +0000 UTC m=+54.241331180 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649134 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649151 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649145901 +0000 UTC m=+54.241377891 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649183 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649202 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:12.649195932 +0000 UTC m=+54.241427912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649231 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649249 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649243033 +0000 UTC m=+54.241475023 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649277 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649295 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649289914 +0000 UTC m=+54.241521904 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649320 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649339 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649333295 +0000 UTC m=+54.241565285 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649370 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649387 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649381946 +0000 UTC m=+54.241613936 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649417 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649436 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649430287 +0000 UTC m=+54.241662277 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649463 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649484 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649478868 +0000 UTC m=+54.241710858 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649509 3556 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649527 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649522179 +0000 UTC m=+54.241754169 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649552 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649592 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649585581 +0000 UTC m=+54.241817571 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649620 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649638 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649633122 +0000 UTC m=+54.241865112 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649677 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649686 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649693 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object 
"openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649712 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649706514 +0000 UTC m=+54.241938504 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649743 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649761 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649756155 +0000 UTC m=+54.241988145 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649793 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649813 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649806616 +0000 UTC m=+54.242038606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649850 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649859 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649865 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: 
[object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649883 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649877628 +0000 UTC m=+54.242109618 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649914 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649932 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.649927459 +0000 UTC m=+54.242159449 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649961 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.649978 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.64997345 +0000 UTC m=+54.242205440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650023 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650045 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650038272 +0000 UTC m=+54.242270262 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650073 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650091 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650085363 +0000 UTC m=+54.242317353 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650132 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650142 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650149 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object 
"openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650169 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650163135 +0000 UTC m=+54.242395125 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650198 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650217 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650211996 +0000 UTC m=+54.242443996 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650247 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650265 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650259627 +0000 UTC m=+54.242491617 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650291 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650308 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650303298 +0000 UTC m=+54.242535288 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650338 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650355 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650349819 +0000 UTC m=+54.242581809 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650385 3556 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650404 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.65039788 +0000 UTC m=+54.242629870 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650430 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650448 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650442841 +0000 UTC m=+54.242674831 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650478 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650499 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650493872 +0000 UTC m=+54.242725862 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650527 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650546 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650540914 +0000 UTC m=+54.242772904 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650579 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650595 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650590605 +0000 UTC m=+54.242822595 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650629 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650648 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650642516 +0000 UTC m=+54.242874506 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650676 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650695 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650689337 +0000 UTC m=+54.242921327 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650721 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650739 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650733698 +0000 UTC m=+54.242965688 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650767 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650784 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650779519 +0000 UTC m=+54.243011509 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650812 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650829 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.65082451 +0000 UTC m=+54.243056500 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650857 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650876 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650870951 +0000 UTC m=+54.243102941 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650903 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.650922 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.650916662 +0000 UTC m=+54.243148652 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.745603 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.745761 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.745812 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.746155 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.746255 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.747143 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747402 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747442 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747461 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747589 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747610 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747623 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747717 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747736 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747835 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747855 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747868 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747964 3556 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.747985 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.748041 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.748259 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.748365 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.748526 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.748764 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.749133 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749387 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749416 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749431 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749485 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749464051 +0000 UTC m=+54.341696061 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749519 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749505342 +0000 UTC m=+54.341737352 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749568 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749556373 +0000 UTC m=+54.341788383 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749592 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749578463 +0000 UTC m=+54.341810463 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749612 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749601984 +0000 UTC m=+54.341833984 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749633 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749622344 +0000 UTC m=+54.341854354 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749712 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749732 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749746 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749790 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749777298 +0000 UTC m=+54.342009308 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749868 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749889 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749904 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.749947 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.749933322 +0000 UTC m=+54.342165332 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750052 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750073 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750085 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750130 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.750116456 +0000 UTC m=+54.342348466 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750209 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750229 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750243 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750285 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.750271969 +0000 UTC m=+54.342503969 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750399 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750418 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750431 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.750467 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.750456593 +0000 UTC m=+54.342688603 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.851935 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.852262 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.852327 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.852351 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.852471 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.852441162 +0000 UTC m=+54.444673182 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.852665 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.852938 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.852992 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.853047 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.853148 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.853122529 +0000 UTC m=+54.445354559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.916942 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.917206 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.917286 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.917504 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.917573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.917711 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.917773 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.917916 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.917977 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.918123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.918181 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.918308 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.918374 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.918514 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.918572 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.918694 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.918754 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.918870 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.918925 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.919139 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.919265 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.919402 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.919503 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.919687 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.919799 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.919971 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.920121 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.920288 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.920385 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.920522 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.920587 3556 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.920715 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.920788 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.920961 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.921089 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.921246 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.921338 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.921479 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.921540 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.921660 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.921728 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.921851 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.921909 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922342 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922418 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922530 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.922664 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922744 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922746 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922882 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.922550 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.922890 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.923116 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.923216 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.923366 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.923375 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.923472 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.923638 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.923502 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.923578 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.923797 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.924001 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.924190 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.924349 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.924524 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955095 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955157 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955181 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955271 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.95524613 +0000 UTC m=+54.547478160 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.955314 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.955466 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.955592 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955743 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955751 3556 
projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955824 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955841 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955768 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955874 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955919 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.955897376 +0000 UTC m=+54.548129376 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.955948 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.955931376 +0000 UTC m=+54.548163406 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: I1128 00:12:56.956138 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.956362 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.956386 3556 projected.go:294] Couldn't get 
configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.956402 3556 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:56 crc kubenswrapper[3556]: E1128 00:12:56.956688 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:13:12.956635512 +0000 UTC m=+54.548867532 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.270792 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:12:57 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:12:57 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:12:57 crc kubenswrapper[3556]: healthz check failed Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.270953 3556 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.273252 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-wwpnd" event={"ID":"2b6d14a5-ca00-40c7-af7a-051a98a24eed","Type":"ContainerStarted","Data":"924aaba6eb34a9a3d11f26c0c7721da7e0495317a274e96f5375ad6332bf1524"} Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.912878 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.912960 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.912972 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913178 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913199 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913221 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913294 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913325 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913303 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913395 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913367 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913465 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913593 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:12:57 crc kubenswrapper[3556]: I1128 00:12:57.913633 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.913811 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.913970 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.914147 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.914220 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.914368 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.914536 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.914695 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.914956 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.914980 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.915123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.915286 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.915490 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.915558 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:12:57 crc kubenswrapper[3556]: E1128 00:12:57.915784 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.264270 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:58 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:58 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:58 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.264370 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912355 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912466 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912506 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912544 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912534 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912599 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912476 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912626 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912476 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912622 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912685 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912713 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912499 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912739 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912736 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912760 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912799 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912803 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912894 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912951 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.912898 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.913042 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.913093 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.913170 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.915236 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.915371 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.915527 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.915530 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.915632 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.915719 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.915828 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.915888 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.916004 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.916071 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.916169 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.916195 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.916365 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.916509 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.916688 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.916729 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.916765 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.916588 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.916665 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.917253 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:12:58 crc kubenswrapper[3556]: I1128 00:12:58.917308 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.917373 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.917546 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.917682 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.917856 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.917965 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.918109 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.918251 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.918378 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.918558 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.918695 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.918869 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919052 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919181 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919242 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919309 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919413 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919593 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919711 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:12:58 crc kubenswrapper[3556]: E1128 00:12:58.919753 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.264178 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:12:59 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:12:59 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:12:59 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.264309 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913225 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913295 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913324 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913354 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913261 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913380 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913393 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913330 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913278 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913279 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913347 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913361 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913401 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:12:59 crc kubenswrapper[3556]: I1128 00:12:59.913346 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.913718 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.913763 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.913831 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.913954 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914039 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914111 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914187 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914232 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914305 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914356 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914470 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914547 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:12:59 crc kubenswrapper[3556]: E1128 00:12:59.914657 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.263913 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:00 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:00 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:00 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.263992 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912169 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912201 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912224 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912227 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912253 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912268 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912285 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912330 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912545 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912575 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912575 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912557 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912660 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.912579 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912603 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912687 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912666 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912637 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912687 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912730 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912709 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912610 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912775 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912776 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912826 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912735 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.912879 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912753 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912981 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.912946 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.913088 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.913130 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.913146 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.913152 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.913276 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:00 crc kubenswrapper[3556]: I1128 00:13:00.913484 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.913524 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.913640 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.913766 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.913862 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.913960 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.914075 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.914168 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.914256 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.914484 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.914607 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.914719 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.914810 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915068 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915285 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915399 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915463 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915475 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915634 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915753 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915871 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.915961 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916082 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916209 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916293 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916367 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916468 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916560 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916658 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:00 crc kubenswrapper[3556]: E1128 00:13:00.916741 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.264466 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:01 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:01 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:01 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.264558 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912595 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.913423 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912689 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.913713 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912729 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912753 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912781 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912811 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.913915 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912795 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912777 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912844 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.914160 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912870 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912899 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912926 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:01 crc kubenswrapper[3556]: I1128 00:13:01.912950 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.914289 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.914472 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.914590 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.914690 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.914809 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.914927 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.915067 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.915166 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.915280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:01 crc kubenswrapper[3556]: E1128 00:13:01.915368 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.264139 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:02 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:02 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:02 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.264219 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.857498 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.857693 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.910838 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output=""
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.912962 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913000 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.912962 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913061 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913007 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913202 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913211 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913226 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913255 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913283 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913298 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913311 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913321 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913329 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913346 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913371 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913399 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913424 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913435 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913236 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913463 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913489 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913507 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913560 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913265 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913265 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913336 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913232 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.913390 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.913813 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.913818 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.913982 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914113 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914203 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914266 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914331 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914422 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914509 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.914542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.914590 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914637 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914732 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914799 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914863 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.914929 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.914959 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915059 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915121 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915175 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915220 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915266 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915308 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915379 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915463 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915525 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915579 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915636 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915696 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915746 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915801 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915865 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915912 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.915962 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.916037 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:02 crc kubenswrapper[3556]: E1128 00:13:02.916356 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:02 crc kubenswrapper[3556]: I1128 00:13:02.963049 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output=""
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.264804 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:03 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:03 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:03 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.264906 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912257 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912327 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912326 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912271 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912355 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912420 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912439 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.912433 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.912512 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912257 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912520 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912562 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912634 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.912588 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.912906 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.912956 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.913047 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:03 crc kubenswrapper[3556]: I1128 00:13:03.913250 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913276 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913361 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913466 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913592 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913666 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913807 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913860 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913917 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.913988 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:03 crc kubenswrapper[3556]: E1128 00:13:03.914082 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.264581 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:04 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:04 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:04 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.265096 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.912647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.912700 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.912934 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.912939 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913160 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.913317 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913383 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913446 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913465 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913521 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913533 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913584 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.913631 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913379 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913642 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913649 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913772 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913842 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.913878 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.913813 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914054 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.914057 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914122 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914146 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914187 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914227 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914212 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914268 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.914335 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914436 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914445 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.914518 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914520 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914598 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.914621 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914679 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914737 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.914835 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.915278 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.915361 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.915434 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.915489 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.915599 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.915731 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.915868 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.915997 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.916196 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:04 crc kubenswrapper[3556]: I1128 00:13:04.916253 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.916471 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.916627 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.916766 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.916927 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917084 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917396 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917426 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917508 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917597 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917741 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.917942 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.918101 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.918807 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.918881 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.918902 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.918986 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:04 crc kubenswrapper[3556]: E1128 00:13:04.919089 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.264664 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:05 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:05 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:05 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.264780 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.912797 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.912826 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.912905 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.912965 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.914486 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913055 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913098 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913141 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913195 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913245 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913309 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913373 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913383 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913405 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:05 crc kubenswrapper[3556]: I1128 00:13:05.913412 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.914686 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.914782 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.914926 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.915101 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.915405 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.915601 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.915810 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.915943 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.916465 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.916679 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.916788 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.916930 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:05 crc kubenswrapper[3556]: E1128 00:13:05.917124 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.265563 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:06 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:06 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:06 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.266309 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912387 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912524 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912589 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912665 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912784 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912797 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912860 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912903 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912538 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912938 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912967 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.912603 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.912915 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913000 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913079 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.913150 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913191 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913207 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913243 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913243 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913264 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913288 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913241 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913357 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913370 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913214 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913400 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913436 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913407 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913486 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913580 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913619 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.913414 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.913644 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.913840 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.914167 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.914437 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.914490 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.914592 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.914745 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.914966 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.915070 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.915159 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.915372 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:06 crc kubenswrapper[3556]: I1128 00:13:06.915520 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.915695 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.915744 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.915833 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.915897 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.916058 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.916171 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.916395 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.916540 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.916761 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.916879 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.916981 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.917111 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.917215 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.917550 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.917557 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.917742 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.917938 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.918048 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.918160 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:06 crc kubenswrapper[3556]: E1128 00:13:06.918317 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.263877 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:07 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:07 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:07 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.264000 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912344 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912405 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912422 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912367 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912434 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912498 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912345 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912533 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912561 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912577 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.912581 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912585 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.912858 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912618 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912630 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:07 crc kubenswrapper[3556]: I1128 00:13:07.912626 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.912783 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.913477 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.913570 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.913481 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.913647 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.913772 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.913864 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.914067 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.914082 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.914167 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.914226 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:07 crc kubenswrapper[3556]: E1128 00:13:07.914339 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.263797 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:08 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:08 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:08 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.263924 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.912842 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.912905 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.912960 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.913001 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.913089 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.913118 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.913149 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.913086 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.912851 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.912868 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.916553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.916617 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.916713 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.916717 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.916803 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.916827 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.916840 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.916844 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917067 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.917116 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.917149 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917206 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917262 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917269 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917296 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917300 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.917450 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917602 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.917652 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917706 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917712 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.917756 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917784 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.917810 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917860 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.917893 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.918043 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.918067 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918084 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918155 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918348 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918374 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918505 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918610 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918686 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918784 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.918838 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.918883 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.918906 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:08 crc kubenswrapper[3556]: I1128 00:13:08.919028 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.919221 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.919241 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.919374 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.919494 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.919600 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.919737 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.919892 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.920121 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.920290 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.920758 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.920779 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.920902 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.920992 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.921066 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.921223 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:08 crc kubenswrapper[3556]: E1128 00:13:08.921226 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.265563 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:09 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:09 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:09 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.265689 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.911980 3556 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912123 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912119 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912172 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912263 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.912325 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912365 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912452 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912470 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912514 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912615 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.912752 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912778 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.912835 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.912946 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.913094 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.913199 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.913270 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.913462 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.913620 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.915042 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.915284 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.915330 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.915450 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:09 crc kubenswrapper[3556]: I1128 00:13:09.915539 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.915582 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.915793 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:09 crc kubenswrapper[3556]: E1128 00:13:09.915952 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.265063 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:10 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:10 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:10 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.265202 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.912896 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.913085 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.913256 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.913292 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.913376 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914093 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914187 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914352 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914468 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914644 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.914673 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.913726 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.914811 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914986 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.915144 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.915249 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.915362 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.915517 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.915573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.915746 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.915987 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.916059 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.916118 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.916205 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.913308 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.916310 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.913337 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.916578 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.916583 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.913860 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914003 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.913268 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.914034 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.916587 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.916908 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.917004 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.917247 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.917374 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.917534 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.917648 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.917815 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.918158 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.918295 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.918544 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.918806 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.919049 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.919364 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.919462 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.919583 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.919714 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.920081 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.920405 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.920909 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.921440 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.921616 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.921853 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.922200 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.922280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:10 crc kubenswrapper[3556]: I1128 00:13:10.922280 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.922443 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.922705 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.924589 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.925110 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.925140 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.925224 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:10 crc kubenswrapper[3556]: E1128 00:13:10.925354 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.265113 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:11 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:11 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:11 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.265234 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.912957 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913067 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913152 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913171 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913078 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913352 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913355 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913530 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.913610 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913662 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913730 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.913773 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.914297 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.913893 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.914107 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.914108 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.914188 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.914439 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.914541 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.914590 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:11 crc kubenswrapper[3556]: I1128 00:13:11.914658 3556 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.914705 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.914822 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.915072 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.915146 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.915377 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.915613 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:11 crc kubenswrapper[3556]: E1128 00:13:11.915770 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.265176 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:12 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:12 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:12 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.265658 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.724738 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.724831 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.724908 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.724948 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.724994 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725076 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725130 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 
00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725184 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725234 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725285 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725369 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod 
\"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725412 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725456 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725501 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725550 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725626 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725699 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725743 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725855 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.725903 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:12 crc 
kubenswrapper[3556]: I1128 00:13:12.725958 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726005 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726083 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726128 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726172 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 
00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726251 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726314 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726361 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726431 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726502 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726574 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726651 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726704 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726778 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:12 crc 
kubenswrapper[3556]: I1128 00:13:12.726826 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726912 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.726962 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.727029 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727281 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727331 
3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727390 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727408 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727478 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727542 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727567 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727572 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727588 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727613 3556 projected.go:200] Error 
preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727646 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727573 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727687 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727714 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727759 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727774 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727825 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727841 3556 configmap.go:199] Couldn't get configMap 
openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727850 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727878 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727914 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727914 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727932 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727433 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727720 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727997 3556 
projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728048 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728083 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728089 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727442 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728157 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728169 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728175 3556 configmap.go:199] Couldn't get 
configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728188 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728239 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727700 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728282 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728293 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728313 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728326 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728347 3556 configmap.go:199] Couldn't 
get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727778 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728401 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728406 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728454 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728538 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728570 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728177 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728457 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object 
"openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728304 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728330 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728100 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728856 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.727779 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728925 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.728742 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod 
openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.729135 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.732394 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732550 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732501323 +0000 UTC m=+86.324733333 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732605 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732585295 +0000 UTC m=+86.324817305 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732652 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732635406 +0000 UTC m=+86.324867416 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732694 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732678467 +0000 UTC m=+86.324910467 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732495 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732763 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732746318 +0000 UTC m=+86.324978328 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732803 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732787379 +0000 UTC m=+86.325019379 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732838 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.73282476 +0000 UTC m=+86.325056770 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732891 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732849771 +0000 UTC m=+86.325081771 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732915 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732903072 +0000 UTC m=+86.325135072 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.732976 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.732936163 +0000 UTC m=+86.325168173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733038 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.732993754 +0000 UTC m=+86.325225754 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733094 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733074386 +0000 UTC m=+86.325306386 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733122 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733109107 +0000 UTC m=+86.325341107 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733152 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733136957 +0000 UTC m=+86.325368957 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733278 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733188978 +0000 UTC m=+86.325421028 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733349 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733319351 +0000 UTC m=+86.325551381 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733416 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733370892 +0000 UTC m=+86.325602922 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733474 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733437015 +0000 UTC m=+86.325669055 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733514 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733492586 +0000 UTC m=+86.325724616 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733570 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733533417 +0000 UTC m=+86.325765457 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733639 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733618429 +0000 UTC m=+86.325850469 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733680 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.73365686 +0000 UTC m=+86.325888890 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733727 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733701931 +0000 UTC m=+86.325933991 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733794 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733772913 +0000 UTC m=+86.326004953 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733834 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733811593 +0000 UTC m=+86.326043623 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733875 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733853394 +0000 UTC m=+86.326085434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733939 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733901416 +0000 UTC m=+86.326133446 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.733977 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.733955337 +0000 UTC m=+86.326187377 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734054 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.733990798 +0000 UTC m=+86.326222838 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734115 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734074769 +0000 UTC m=+86.326306819 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734160 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734133781 +0000 UTC m=+86.326365821 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734207 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734176552 +0000 UTC m=+86.326408592 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734245 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.734223883 +0000 UTC m=+86.326455913 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734305 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734282224 +0000 UTC m=+86.326514254 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734356 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734324945 +0000 UTC m=+86.326556975 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734396 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734376946 +0000 UTC m=+86.326608986 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734453 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734413087 +0000 UTC m=+86.326645137 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734502 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.734471108 +0000 UTC m=+86.326703148 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734537 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.73451798 +0000 UTC m=+86.326750010 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.734820 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed.
No retries permitted until 2025-11-28 00:13:44.734787046 +0000 UTC m=+86.327019186 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.735502 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.735663 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.736278 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.736382 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736400 3556 projected.go:269] Couldn't get
secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736422 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.736461 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736479 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.736447105 +0000 UTC m=+86.328679255 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736515 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.736501227 +0000 UTC m=+86.328733237 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736533 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736555 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.736562 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736570 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.736625 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") "
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736633 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736669 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736694 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736695 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.736681421 +0000 UTC m=+86.328913421 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736762 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736773 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.736747122 +0000 UTC m=+86.328979272 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.736675 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736801 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736811 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.736797343 +0000 UTC m=+86.329029583 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736667 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736836 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736860 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736924 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.736900366 +0000 UTC m=+86.329132366 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736941 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736952 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.736938257 +0000 UTC m=+86.329170257 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.736863 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736966 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.736984 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.737120 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.737174 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not
registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.737524 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.737539 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.737223 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.737199552 +0000 UTC m=+86.329431722 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.737976 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.738128 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.738046463 +0000 UTC m=+86.330278493 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.738182 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.738281 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.738808 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.738898 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.738934 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.738966 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.738999 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739065 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739100 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") "
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739133 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739169 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739208 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739241 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739273 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739311 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739357 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739391 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739425 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]:
I1128 00:13:12.739457 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739488 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739521 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739564 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739600 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID:
\"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739645 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739675 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739707 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739737 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.739769 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.739941 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.739966 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.739980 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740082 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.74006693 +0000 UTC m=+86.332298930 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740078 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740108 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740145 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.740131651 +0000 UTC m=+86.332363661 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740155 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740176 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740204 3556 configmap.go:199] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740170 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.740156432 +0000 UTC m=+86.332388672 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740208 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740276 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.740251744 +0000 UTC m=+86.332483764 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740286 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740304 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740309 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.740291945 +0000 UTC m=+86.332524155 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740241 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740531 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740525 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740344 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.740324025 +0000 UTC m=+86.332556055 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740753 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.740711765 +0000 UTC m=+86.332943755 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740755 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740352 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740850 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740374 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740873 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" 
not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740897 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740921 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740391 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.738564 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740953 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740988 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741004 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740429 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741073 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740439 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740460 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.738352 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741252 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740506 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:13:12 crc 
kubenswrapper[3556]: E1128 00:13:12.740499 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740524 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741412 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741500 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.740560 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740561 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741618 3556 projected.go:200] Error preparing 
data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740780 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.740786 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.740773437 +0000 UTC m=+86.333005427 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741738 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.741717968 +0000 UTC m=+86.333949998 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741767 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.741752359 +0000 UTC m=+86.333984389 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741798 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.741781219 +0000 UTC m=+86.334013249 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741826 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.74181276 +0000 UTC m=+86.334044790 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741853 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.741838862 +0000 UTC m=+86.334070892 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741888 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.741873283 +0000 UTC m=+86.334105323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741920 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.741906883 +0000 UTC m=+86.334138913 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.741964 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.741948774 +0000 UTC m=+86.334180804 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742002 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.741986675 +0000 UTC m=+86.334218705 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742074 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742060897 +0000 UTC m=+86.334292927 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742113 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.742096848 +0000 UTC m=+86.334328878 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742151 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742137219 +0000 UTC m=+86.334369249 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742180 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742166179 +0000 UTC m=+86.334398209 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742213 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.74219294 +0000 UTC m=+86.334424970 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742293 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742231191 +0000 UTC m=+86.334463451 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742346 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742323163 +0000 UTC m=+86.334555403 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742389 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742374394 +0000 UTC m=+86.334606424 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742424 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742411645 +0000 UTC m=+86.334643685 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742454 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742441375 +0000 UTC m=+86.334673405 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.742511 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.742617 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742618 3556 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742662 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.742615909 +0000 UTC m=+86.334847919 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742690 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742681181 +0000 UTC m=+86.334913181 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.742723 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742736 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.742779 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742789 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742774103 +0000 UTC m=+86.335006133 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742802 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742817 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742802614 +0000 UTC m=+86.335034644 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742844 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742886 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742869685 +0000 UTC m=+86.335101695 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.742908 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.742899396 +0000 UTC m=+86.335131396 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.742916 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.742979 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743062 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743117 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743116 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743165 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743214 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743239 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743220 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.743184702 +0000 UTC m=+86.335416732 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743413 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743125 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743493 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743505 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.74348753 +0000 UTC m=+86.335719560 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743325 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743557 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743571 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.743554842 +0000 UTC m=+86.335786872 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743603 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.743587282 +0000 UTC m=+86.335819312 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743629 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.743616493 +0000 UTC m=+86.335848533 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743666 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743699 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743722 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743757 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.743708825 +0000 UTC m=+86.335940855 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743774 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743799 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.743778497 +0000 UTC m=+86.336010527 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743878 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743873 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.743984 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.743985 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744063 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.743967821 +0000 UTC m=+86.336199971 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744105 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744085624 +0000 UTC m=+86.336317654 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744132 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744219 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744192806 +0000 UTC m=+86.336425026 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744308 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744373 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744439 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744416791 +0000 UTC m=+86.336648931 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744465 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744478 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744458762 +0000 UTC m=+86.336690972 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744533 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744569 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744593 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744598 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744623 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744609566 +0000 UTC m=+86.336841566 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744646 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744635767 +0000 UTC m=+86.336867767 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744679 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744688 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744728 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744746 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744722089 +0000 UTC m=+86.336954119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744775 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744807 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744797031 +0000 UTC m=+86.337029031 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744814 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744857 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744890 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744921 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.744904863 +0000 UTC m=+86.337136893 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744948 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.744966 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744971 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.744991 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.745067 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745121 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.745104018 +0000 UTC m=+86.337336278 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745161 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745210 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.74520019 +0000 UTC m=+86.337432190 (durationBeforeRetry 32s). Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745258 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.745226211 +0000 UTC m=+86.337458211 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745258 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745307 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.745299392 +0000 UTC m=+86.337531392 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.745328 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.745592 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745431 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.745654 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745667 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.74565209 +0000 UTC m=+86.337884090 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745698 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745740 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.745730702 +0000 UTC m=+86.337962702 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745777 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745805 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745825 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745831 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745866 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.745855955 +0000 UTC m=+86.338087955 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.745780 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.745888 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.745877545 +0000 UTC m=+86.338109545 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.745963 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746050 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746084 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746118 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746135 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.746124282 +0000 UTC m=+86.338356292 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746138 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746174 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746159543 +0000 UTC m=+86.338391543 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746200 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746210 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746266 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746300 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746313 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object 
"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746315 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746338 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746342 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746327327 +0000 UTC m=+86.338559337 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746372 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746361337 +0000 UTC m=+86.338593337 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746400 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746390548 +0000 UTC m=+86.338622548 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746408 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746433 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746437 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746450 3556 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746434069 +0000 UTC m=+86.338666069 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746501 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746542 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746533071 +0000 UTC m=+86.338765071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746529 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746574 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746562222 +0000 UTC m=+86.338794212 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746599 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746646 3556 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746657 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746639584 +0000 UTC m=+86.338871624 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746678 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.746669764 +0000 UTC m=+86.338901764 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746610 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746758 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746833 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746892 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746925 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746931 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.74692138 +0000 UTC m=+86.339153380 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.746979 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.746995 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.747048 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.747038853 +0000 UTC m=+86.339270853 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.747063 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.747108 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.747121 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.747108084 +0000 UTC m=+86.339340084 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.747147 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.747138575 +0000 UTC m=+86.339370565 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.848867 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.849037 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.849221 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849239 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc 
kubenswrapper[3556]: E1128 00:13:12.849287 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849311 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849422 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.84939171 +0000 UTC m=+86.441623740 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849420 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849458 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849500 3556 projected.go:294] Couldn't get configMap 
openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849511 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849530 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849533 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.849622 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849675 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.849632046 +0000 UTC m=+86.441864086 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849723 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.849702488 +0000 UTC m=+86.441934708 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849748 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849772 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849787 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object 
"openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.849836 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.849821201 +0000 UTC m=+86.442053231 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.850248 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.850480 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.850609 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod 
\"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.850742 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.850769 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.850884 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.850912 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.850918 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.850939 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 
00:13:12.850993 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.850973887 +0000 UTC m=+86.443206057 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.850988 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851054 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851072 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851077 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.85105141 +0000 UTC m=+86.443283620 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851162 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.851132522 +0000 UTC m=+86.443364522 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851181 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851216 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851229 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: 
E1128 00:13:12.851292 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.851270045 +0000 UTC m=+86.443502035 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.851443 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.851518 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851688 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851733 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851737 3556 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851757 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851766 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851778 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851848 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.851826847 +0000 UTC m=+86.444058877 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.851901 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.851890019 +0000 UTC m=+86.444122019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.852319 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.852456 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.852492 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.852509 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.852585 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.852553475 +0000 UTC m=+86.444785505 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.912970 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913150 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913234 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913251 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913050 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913187 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913358 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913385 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913371 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913443 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913383 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913442 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913528 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913577 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913603 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913536 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913634 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.913664 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913703 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913734 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.913554 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913794 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913809 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913857 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913806 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.913878 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.914090 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.914103 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.914398 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.914406 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.914529 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.914596 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.914847 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.915003 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.915247 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.915281 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.915398 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.915546 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.915598 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.915756 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.915758 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.915945 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.916336 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.916689 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.916969 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.917160 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.917307 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.917413 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.917555 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.917647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.918061 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.918217 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.918366 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.918362 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.918518 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.918984 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.919077 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.919153 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.920199 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.920761 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.920894 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.920956 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.921098 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.921208 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.921276 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.921369 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.954900 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.955078 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.955100 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.955113 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.955237 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:44.955217009 +0000 UTC m=+86.547448989 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: I1128 00:13:12.955421 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.956305 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.956369 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.956391 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:12 crc kubenswrapper[3556]: E1128 00:13:12.956504 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. 
No retries permitted until 2025-11-28 00:13:44.956474539 +0000 UTC m=+86.548706559 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.058001 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.058190 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.058586 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.058599 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.058559 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") 
pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.058664 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:13:45.058646351 +0000 UTC m=+86.650878331 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.058846 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.059003 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059091 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059108 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059117 3556 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059125 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059153 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:13:45.059144353 +0000 UTC m=+86.651376343 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059171 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059194 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059274 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:45.059249865 +0000 UTC m=+86.651481895 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059492 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059569 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059599 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.059745 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:13:45.059707357 +0000 UTC m=+86.651939477 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.265293 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:13 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:13 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:13 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.265407 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912439 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912764 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912461 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912465 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912500 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912510 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912526 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912558 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912557 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912590 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912592 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912607 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912634 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:13 crc kubenswrapper[3556]: I1128 00:13:13.912647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.913587 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.913867 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914049 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914079 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914178 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914334 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914464 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914528 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914628 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914690 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914747 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914837 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914904 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:13 crc kubenswrapper[3556]: E1128 00:13:13.914966 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.135612 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.265313 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:14 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:14 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:14 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.265457 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.912917 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.912970 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913004 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913044 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913102 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913162 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913112 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.912944 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913196 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913219 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913121 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913258 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913183 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913116 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913185 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913343 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913314 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913370 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913432 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913492 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913534 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913551 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913578 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913593 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913614 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913644 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913648 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913714 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913781 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.913789 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.913823 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.913981 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914122 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914182 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914399 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914447 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914550 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914656 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914804 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914838 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.914881 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.914971 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.915053 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915116 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915208 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915283 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915357 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915433 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915591 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915663 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915729 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:14 crc kubenswrapper[3556]: I1128 00:13:14.915788 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915858 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915916 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.915978 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916082 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916175 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916260 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916330 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916403 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916789 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916877 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.916990 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.917098 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.917165 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:14 crc kubenswrapper[3556]: E1128 00:13:14.917245 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.264494 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:15 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:15 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:15 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.264638 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913057 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913430 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913539 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913564 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913580 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913198 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913197 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.913827 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913283 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913298 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913304 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913338 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.913201 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.914080 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914110 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914275 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914361 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914442 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914526 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914636 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914703 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914798 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914890 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.914970 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.915068 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.915141 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:15 crc kubenswrapper[3556]: I1128 00:13:15.916907 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:15 crc kubenswrapper[3556]: E1128 00:13:15.917775 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.266972 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:16 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:16 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:16 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.267136 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.912701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.912727 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.912825 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.912861 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.912971 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913143 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913189 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913145 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913222 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913190 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913261 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913283 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913313 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913350 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913358 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913385 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913409 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913428 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913190 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913471 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.913482 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913358 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913216 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913550 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913400 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913570 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913474 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913629 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913527 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913414 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913799 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913809 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.913873 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.913869 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.913956 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.914481 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.914656 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.914815 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.915003 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.915231 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.915324 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.915555 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.915746 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.915866 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916136 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:16 crc kubenswrapper[3556]: I1128 00:13:16.912735 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916191 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916412 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916212 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916512 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916586 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916647 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916719 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916796 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916914 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.916988 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917218 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917302 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917434 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917541 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917642 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917767 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917852 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.917965 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.918109 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:16 crc kubenswrapper[3556]: E1128 00:13:16.918269 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.266673 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:17 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:17 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:17 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.267007 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.912884 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.912924 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913032 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913134 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913157 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913204 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913217 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913257 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.913186 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913168 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913291 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913299 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.913411 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.913514 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913618 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.913687 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.913793 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.913883 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.913970 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914080 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914173 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914272 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:17 crc kubenswrapper[3556]: I1128 00:13:17.914313 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914375 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914437 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914486 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914549 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:17 crc kubenswrapper[3556]: E1128 00:13:17.914636 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.265225 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:18 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:18 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:18 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.265844 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.683905 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.684104 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.684161 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.684203 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.684243 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.912863 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.912902 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.912954 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.914874 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.914888 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.914969 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915006 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915100 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.915282 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915309 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915325 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915387 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915438 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915386 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915401 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915417 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915563 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915598 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.915603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.915670 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915724 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.915817 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915840 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915854 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.915929 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.915989 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.916071 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.916357 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.916435 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.916466 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.916354 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.916645 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.916739 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.916910 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.916981 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.917105 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.917196 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.917265 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.917346 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.917404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.917498 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.917875 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.918002 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.918118 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.918191 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.918213 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.918303 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.917916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:18 crc kubenswrapper[3556]: I1128 00:13:18.917916 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.918924 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.919040 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.919318 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.919561 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.919999 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.920004 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.920239 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.920446 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.920538 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.920656 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.920862 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.920967 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.921226 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.921251 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.921428 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.921307 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:18 crc kubenswrapper[3556]: E1128 00:13:18.921701 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.265222 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:19 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:19 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:19 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.265355 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912419 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912462 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912534 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912549 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912621 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912718 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912453 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912795 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912678 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912842 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912746 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.912992 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.913177 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.913323 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.913464 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:19 crc kubenswrapper[3556]: I1128 00:13:19.913740 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.913765 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.913781 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.913985 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.914127 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.914320 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.914455 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.914618 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.914722 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.914824 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.914893 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:19 crc kubenswrapper[3556]: E1128 00:13:19.915116 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.265129 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:20 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:20 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:20 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.265223 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.912742 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.912842 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.912957 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913364 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.913377 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913423 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913490 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913451 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913576 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913600 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913621 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.913715 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.914151 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.914358 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.914456 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.914656 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.914888 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.914906 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.914996 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.915211 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915234 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915251 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915306 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.915485 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915495 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.915685 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915699 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915770 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915848 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.915855 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915913 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.915947 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.916074 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.916212 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.916268 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.916365 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.916465 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.916613 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.916776 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.916901 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.917075 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.917203 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.917254 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.917389 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.917544 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.917596 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.917672 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.917759 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.917818 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.917936 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:20 crc kubenswrapper[3556]: I1128 00:13:20.918050 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.918149 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.918269 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.918356 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.918735 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.918844 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.918945 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920208 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920282 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920327 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920442 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920547 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920537 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920747 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:20 crc kubenswrapper[3556]: E1128 00:13:20.920816 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.264892 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:21 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:21 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:21 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.265040 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912160 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912251 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912349 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912440 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912535 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912565 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912596 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912633 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912457 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912653 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912457 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912563 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.912662 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.912775 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:21 crc kubenswrapper[3556]: I1128 00:13:21.913040 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913006 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913101 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913219 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913418 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913618 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913670 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913818 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.913961 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.914121 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.914258 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.914332 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.914438 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:21 crc kubenswrapper[3556]: E1128 00:13:21.914565 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.264080 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:22 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:22 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:22 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.264200 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.912136 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.912621 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.912766 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.912859 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.912941 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913047 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.912732 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913130 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.913145 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913215 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.913260 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913331 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913412 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913499 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.913530 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913585 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913633 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913702 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913732 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.913741 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913796 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913835 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913915 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.913975 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914082 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.914084 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914221 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.914256 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.914363 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914370 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914397 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914547 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.914621 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914656 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914704 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914741 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.914755 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914854 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914899 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.914920 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.914973 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.915051 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.915106 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.915108 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:22 crc kubenswrapper[3556]: I1128 00:13:22.915228 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.915263 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.915400 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.915576 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.915843 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.916185 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.916228 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.916373 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.916504 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.917594 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.917940 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.918055 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.918153 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.918280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.918305 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.918490 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.918552 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.918844 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.919083 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.919290 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.919417 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:22 crc kubenswrapper[3556]: E1128 00:13:22.919539 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.264958 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:23 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:23 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:23 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.265078 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912389 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912500 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912522 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912571 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.912666 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912688 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912864 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.912921 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912945 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912928 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912982 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.912936 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.913073 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.913341 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.913615 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.913664 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.913786 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.913846 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.913898 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.913947 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:23 crc kubenswrapper[3556]: I1128 00:13:23.913979 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.914107 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.914200 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.914287 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.914352 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.914442 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:23 crc kubenswrapper[3556]: E1128 00:13:23.914516 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.265356 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:24 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:24 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:24 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.265810 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913048 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913147 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913324 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.913392 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913394 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913478 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913667 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913686 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913707 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913061 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913759 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913810 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913822 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913873 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.913908 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913940 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.913948 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.913997 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914039 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914126 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.914134 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914176 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914189 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.914241 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914261 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.914311 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914331 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914349 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914388 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914407 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914514 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.914487 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.914621 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.914805 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.914926 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.914985 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.915083 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.915326 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.915368 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.915421 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.915370 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.915525 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.915624 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.915815 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.915859 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.915908 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.916114 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.916136 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:24 crc kubenswrapper[3556]: I1128 00:13:24.916176 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.916279 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.916455 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.916550 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.916634 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.916842 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917037 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917125 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917283 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917409 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917441 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917526 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917625 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917728 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.917948 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.918057 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:24 crc kubenswrapper[3556]: E1128 00:13:24.918136 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.265131 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:25 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:25 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:25 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.265295 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.912327 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.912362 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.912500 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.912586 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.912724 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.912826 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.912879 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.913076 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.913092 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.913167 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.913206 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.913228 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.913390 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.913515 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.913650 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.913710 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.913847 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.913905 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:25 crc kubenswrapper[3556]: I1128 00:13:25.913995 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914144 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914232 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914334 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914457 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914569 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914698 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914810 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.914959 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:25 crc kubenswrapper[3556]: E1128 00:13:25.915128 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.271194 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:26 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:26 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:26 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.271750 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913084 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913214 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913305 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913355 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913425 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913528 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913582 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.913619 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913730 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913795 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913550 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913848 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913904 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.913939 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.913994 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.914052 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914133 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914141 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914170 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914298 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914334 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914364 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.914384 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.914194 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914455 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914258 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914471 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914482 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914275 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914527 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914414 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914413 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.914432 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.914693 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.914813 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.914905 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.915088 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.915197 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.915268 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.915471 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.915613 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.915668 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.915742 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.915805 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.915901 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.916061 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.916208 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.916262 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.916371 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.916494 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.916587 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.916764 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.916888 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:26 crc kubenswrapper[3556]: I1128 00:13:26.917046 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917128 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917323 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917433 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917534 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917717 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917785 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917897 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.917964 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.918174 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.918265 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.918342 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:26 crc kubenswrapper[3556]: E1128 00:13:26.918471 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.264856 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:27 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:27 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:27 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.265074 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.407421 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.407510 3556 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="6e48d427ed2b5ca2c86082810b5594169678d94b73922fdf6c408e4bbe775561" exitCode=1
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.407550 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"6e48d427ed2b5ca2c86082810b5594169678d94b73922fdf6c408e4bbe775561"}
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.408217 3556 scope.go:117] "RemoveContainer" containerID="6e48d427ed2b5ca2c86082810b5594169678d94b73922fdf6c408e4bbe775561"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.912918 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.913134 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.913379 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.913468 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.913605 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.913704 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.913839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.913933 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.914098 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.914204 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.914349 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.914440 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.914587 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.914683 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.914818 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.914905 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.915069 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.915161 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.915300 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.915400 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.915545 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.915636 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.915808 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.915905 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.916069 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.916155 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:27 crc kubenswrapper[3556]: I1128 00:13:27.916332 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:27 crc kubenswrapper[3556]: E1128 00:13:27.916437 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.264270 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:28 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:28 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:28 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.264726 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.412303 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log"
Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.412561 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"90dd7dbcf1699d6c2dd098e8bad21d98d61147b5b5812093844f54c0f01e65f5"}
Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912222 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912296 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912995 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912329 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912323 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912362 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912390 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912400 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912414 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912435 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912442 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912446 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912467 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912494 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912509 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912525 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912531 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912553 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912564 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912585 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912605 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912599 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912607 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912638 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912654 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912655 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912669 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912714 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912716 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912719 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912725 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:28 crc kubenswrapper[3556]: I1128 00:13:28.912765 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.916553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.917107 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.917222 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.917412 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.917549 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.917755 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.918449 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.918620 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.918628 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.918771 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.918885 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.918994 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.919666 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.919172 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.919225 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.919342 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.919466 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.919557 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.919918 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920113 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920467 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920585 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920495 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920710 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920815 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920899 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.920997 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.921111 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.921214 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.921274 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.921471 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.921554 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:28 crc kubenswrapper[3556]: E1128 00:13:28.921637 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.264718 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:29 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:29 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:29 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.264815 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913121 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913162 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913204 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913297 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913130 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913158 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913426 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.913454 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913518 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913536 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913596 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913622 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913518 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913693 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.913594 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:29 crc kubenswrapper[3556]: I1128 00:13:29.913805 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.913987 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.914136 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.914504 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.914588 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.914730 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.914925 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.915053 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.915239 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.915302 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.915455 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.915581 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:29 crc kubenswrapper[3556]: E1128 00:13:29.915707 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.265357 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:30 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:30 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:30 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.265704 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912436 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912493 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912539 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912584 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912585 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912659 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912673 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912720 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912493 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912729 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912759 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.912919 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912786 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.913051 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.913070 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.913079 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912751 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.913144 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.913155 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.912926 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.913910 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.914187 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.914271 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.914810 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.914831 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.915071 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.914980 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.915967 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.917104 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.917722 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.918229 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.918516 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.918980 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.919290 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.919668 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.919909 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.920271 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.920409 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.920582 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.920843 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.921167 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.921715 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.921965 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.922427 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.922503 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.922526 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:30 crc kubenswrapper[3556]: I1128 00:13:30.922631 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.922678 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.922860 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.923211 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.923228 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.923518 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.924888 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.925099 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.925263 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.925387 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.925527 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.925673 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.925841 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.925963 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.926188 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.926304 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.926498 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.926606 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.926731 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:30 crc kubenswrapper[3556]: E1128 00:13:30.926938 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.264389 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:31 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:31 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:31 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.264472 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.912845 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.912885 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.912943 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.912958 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913073 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913118 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913157 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913118 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.912861 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.913271 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913135 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913165 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913293 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.913438 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913442 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.913650 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:31 crc kubenswrapper[3556]: I1128 00:13:31.913671 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.913946 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.914111 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.914244 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.914412 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.914559 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.914705 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.914868 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.915083 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.915183 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.915308 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:31 crc kubenswrapper[3556]: E1128 00:13:31.915474 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.264856 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:32 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:32 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:32 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.264981 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.912736 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.912772 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.912827 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.912911 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.912972 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.913037 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.912979 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.912928 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.913107 3556 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913501 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913578 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913653 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913715 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913772 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913832 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913889 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913933 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.913997 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.914320 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.914671 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.914793 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.914937 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.914991 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.915190 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.915270 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.915407 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.915469 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.915597 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.915738 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.915914 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.915922 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.915972 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916069 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.916197 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916196 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916244 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916361 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916420 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916462 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916630 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916658 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916715 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.916744 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.916970 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.917001 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.917239 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.917335 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.917603 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.917673 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.917762 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.917778 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.917815 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.917824 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.917921 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.917998 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.918179 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.918337 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.918418 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.918543 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.918676 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.918789 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.918869 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.918964 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.919186 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.919490 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:32 crc kubenswrapper[3556]: E1128 00:13:32.919836 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:32 crc kubenswrapper[3556]: I1128 00:13:32.924429 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.265226 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:33 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:33 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:33 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.265368 3556 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.912797 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.912952 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913115 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913175 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913327 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.913367 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913381 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913489 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.913491 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913523 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913381 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913617 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913405 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913628 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913556 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.913766 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:33 crc kubenswrapper[3556]: I1128 00:13:33.913820 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.913857 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.914168 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.914462 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.914566 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.914725 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.914770 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.914833 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.914979 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.915142 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.915193 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:33 crc kubenswrapper[3556]: E1128 00:13:33.915307 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.264697 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:34 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:34 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:34 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.264827 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912335 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912412 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912372 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912428 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912501 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912475 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912563 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912594 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912659 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912690 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912730 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912691 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912777 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912776 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912813 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912806 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912856 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912858 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912758 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912849 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912601 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912736 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.912994 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913042 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913083 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913086 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913174 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913209 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913209 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913113 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913399 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.913407 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.913524 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:34 crc kubenswrapper[3556]: I1128 00:13:34.913628 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.913683 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.913842 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.913988 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.914260 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.914434 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.914710 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.914833 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.914948 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915139 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915223 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915429 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915506 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915681 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915721 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915767 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.915893 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.916072 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.916203 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.916420 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.916519 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.916594 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.916704 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.916937 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917053 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917171 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917269 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917443 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917616 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917643 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917707 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:34 crc kubenswrapper[3556]: E1128 00:13:34.917765 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.265501 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:35 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:35 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:35 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.265638 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.912663 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.912757 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.912773 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.912916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.912935 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.912979 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.913087 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.912950 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.913092 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.913136 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.913172 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.913139 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.913274 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.913326 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.913445 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.913587 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:35 crc kubenswrapper[3556]: I1128 00:13:35.913718 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.913912 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.914072 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.914172 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.914372 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.914555 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.914727 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.915171 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.915269 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.915431 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.915572 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:35 crc kubenswrapper[3556]: E1128 00:13:35.915723 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.265509 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:36 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:36 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:36 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.265624 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912336 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912357 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912959 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912990 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913053 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912401 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.913123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913139 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912444 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913231 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912712 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913276 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912689 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.913313 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912762 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912781 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912769 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913421 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.913443 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912792 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912802 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913514 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.913546 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912799 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913614 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.912822 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913384 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913664 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913686 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913711 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.913771 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.913769 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913775 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913803 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913887 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913914 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913938 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:36 crc kubenswrapper[3556]: I1128 00:13:36.913966 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.914066 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.914147 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.914617 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.914698 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.914798 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.914884 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.914993 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.916124 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.916414 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.916581 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.916716 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.916882 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.917066 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.917215 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.917351 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.917612 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.917779 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.918087 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.918155 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.918291 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.918406 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.918576 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.918693 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.919076 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.919202 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.919374 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:36 crc kubenswrapper[3556]: E1128 00:13:36.919501 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.265137 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:37 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:37 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:37 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.265305 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913054 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913137 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913165 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913191 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.913346 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913355 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913262 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913411 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913463 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913507 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913478 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913545 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913759 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.913589 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913658 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:37 crc kubenswrapper[3556]: I1128 00:13:37.913680 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.913995 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.914141 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.914208 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.914379 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.914587 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.914746 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.914876 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.915128 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.915229 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.915349 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.915508 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:37 crc kubenswrapper[3556]: E1128 00:13:37.915684 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.264189 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:38 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:38 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:38 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.264282 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.912535 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.912563 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.912595 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.912651 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.912701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.912548 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.912861 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.914687 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.914706 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.914767 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.914812 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.914770 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.914852 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.914960 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.915086 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915125 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915187 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.915241 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915273 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915315 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915376 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915423 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.915478 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915509 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915556 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.915607 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.915641 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.915841 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.915969 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.916084 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.916317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.916343 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.916352 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.916405 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.916446 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.916472 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.916590 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.916762 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.916895 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.916952 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.916984 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917079 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917139 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917214 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917301 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917413 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917529 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.917594 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917664 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.917695 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917758 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917821 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.917952 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.918191 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.918234 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.918433 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.918477 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:38 crc kubenswrapper[3556]: I1128 00:13:38.918487 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.918375 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.918846 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.919136 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.919328 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.919425 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.919539 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.919796 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:38 crc kubenswrapper[3556]: E1128 00:13:38.919922 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.265237 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:39 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:39 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:39 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.265366 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.912908 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.912983 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.913028 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.912920 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.912926 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.913254 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.913382 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.913527 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.913653 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.913760 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.913955 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.914130 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.914222 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.914374 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.914454 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.914594 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.914689 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.914869 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.914945 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.915127 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.915230 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.915361 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.915440 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.915579 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.915637 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.915773 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:39 crc kubenswrapper[3556]: I1128 00:13:39.915826 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:39 crc kubenswrapper[3556]: E1128 00:13:39.915957 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.264777 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:40 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:40 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:40 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.264951 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913058 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913106 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913167 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913213 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913240 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913309 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913329 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913341 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913364 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913172 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913467 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913475 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913488 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.913479 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913568 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913566 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913638 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913660 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913686 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913700 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913716 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913713 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913670 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913749 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913582 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.913876 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.913907 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.914155 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.914188 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.914247 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.914363 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.914415 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.914553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.914620 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.914727 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.915096 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.915146 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.915246 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.915360 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.915425 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:40 crc kubenswrapper[3556]: I1128 00:13:40.915434 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.915580 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.915685 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.915694 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.915993 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.916152 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.916245 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.916421 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.916496 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.916582 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.916794 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.916978 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.917187 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.917234 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.917292 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.917485 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.917810 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.917900 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.918080 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.918120 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.918245 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.918486 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.918866 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.918941 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:40 crc kubenswrapper[3556]: E1128 00:13:40.919059 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.265075 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:41 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:41 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:41 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.265218 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913091 3556 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913151 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913186 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913228 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913313 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913166 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913411 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913416 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.913588 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.913818 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913900 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.913997 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.914165 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.914183 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.914245 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.914347 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.914425 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:41 crc kubenswrapper[3556]: I1128 00:13:41.914456 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.914530 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.914644 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.914774 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.914943 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.915169 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.915300 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.915382 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.915554 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.915657 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:41 crc kubenswrapper[3556]: E1128 00:13:41.915784 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.266753 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:42 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:42 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:42 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.266863 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.913228 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.913609 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.913714 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.913846 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.913916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.914085 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.914176 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.914322 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.914399 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.914503 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.914608 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.914675 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.914615 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.915064 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.915337 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916197 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916311 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.916418 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.915617 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.915697 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916940 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.916948 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916044 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916108 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.917136 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.917286 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916154 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916614 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.916686 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916757 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.916810 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.917512 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.917461 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.917699 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.917892 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.917894 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.917975 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.918174 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.918353 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.918322 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.918512 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.918601 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.915808 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.918694 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.918749 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.918788 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.919126 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.919309 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.919350 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.919454 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.919208 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.919646 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.919829 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.919974 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.920450 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.920859 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.921041 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.921142 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.921228 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.921316 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.921355 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.921698 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.922124 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.922498 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:42 crc kubenswrapper[3556]: I1128 00:13:42.922605 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:42 crc kubenswrapper[3556]: E1128 00:13:42.923062 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.264692 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:43 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:43 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:43 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.264827 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912672 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.913054 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912761 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912761 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912852 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912854 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912917 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912918 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912925 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912926 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.912991 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.913064 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.913088 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:43 crc kubenswrapper[3556]: I1128 00:13:43.913091 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.914561 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.914758 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.914930 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.915086 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.915261 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.915444 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.915622 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.915729 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.916169 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.916206 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.916507 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.916615 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.916750 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:43 crc kubenswrapper[3556]: E1128 00:13:43.916942 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.265703 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:44 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:44 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:44 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.265847 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809274 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809352 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809376 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809406 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809434 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809458 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809490 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.809519 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809600 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809675 3556 secret.go:194] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809737 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.809713634 +0000 UTC m=+150.401945624 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809758 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.809748875 +0000 UTC m=+150.401980865 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809804 3556 secret.go:194] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809832 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.809823427 +0000 UTC m=+150.402055417 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809871 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809894 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.809887188 +0000 UTC m=+150.402119178 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809929 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809952 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls podName:297ab9b6-2186-4d5b-a952-2bfd59af63c4 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.809946929 +0000 UTC m=+150.402178919 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls") pod "machine-config-controller-6df6df6b6b-58shh" (UID: "297ab9b6-2186-4d5b-a952-2bfd59af63c4") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809988 3556 secret.go:194] Couldn't get secret openshift-image-registry/installation-pull-secrets: object "openshift-image-registry"/"installation-pull-secrets" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.810026 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.810003591 +0000 UTC m=+150.402235581 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "installation-pull-secrets" (UniqueName: "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"installation-pull-secrets" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.810060 3556 configmap.go:199] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.810080 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.810074832 +0000 UTC m=+150.402306822 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810112 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810136 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810182 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810210 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810237 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810259 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810282 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810305 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810330 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810361 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810386 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810407 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810432 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810457 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810481 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810505 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810529 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810551 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810586 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810609 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810642 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810664 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810691 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810715 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810739 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810762 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810787 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810810 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810834 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810857 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810881 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.809624 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810905 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.810947 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.810984 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.810944092 +0000 UTC m=+150.403176122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811036 3556 secret.go:194] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811063 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811056335 +0000 UTC m=+150.403288325 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : object "openshift-dns-operator"/"metrics-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811096 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811117 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811111186 +0000 UTC m=+150.403343176 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.811127 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811144 3556 configmap.go:199] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.811188 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811220 3556 projected.go:294] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811235 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.811255 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811265 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811258039 +0000 UTC m=+150.403490029 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811300 3556 configmap.go:199] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.811305 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128
00:13:44.811321 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811314901 +0000 UTC m=+150.403546891 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811363 3556 projected.go:294] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811371 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.811379 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811392 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811386672 +0000 UTC m=+150.403618662 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811426 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811448 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811442163 +0000 UTC m=+150.403674153 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.811427 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811496 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811511 3556 
projected.go:294] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811523 3556 projected.go:294] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811534 3556 projected.go:200] Error preparing data for projected volume kube-api-access-j7zrh for pod openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811545 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811529965 +0000 UTC m=+150.403761995 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811574 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811558856 +0000 UTC m=+150.403790886 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7zrh" (UniqueName: "kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811584 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811596 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811603 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tvc4r for pod openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.810991 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811626 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811618607 +0000 UTC m=+150.403850597 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tvc4r" (UniqueName: "kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811660 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811648158 +0000 UTC m=+150.403880188 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811668 3556 secret.go:194] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811693 3556 configmap.go:199] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811714 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811702969 +0000 UTC m=+150.403934999 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-client" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811746 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81173397 +0000 UTC m=+150.403966000 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811750 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811785 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811799 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811806 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tf29r for pod openshift-marketplace/redhat-marketplace-8s8pc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 
00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811831 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r podName:c782cf62-a827-4677-b3c2-6f82c5f09cbb nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811821382 +0000 UTC m=+150.404053372 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tf29r" (UniqueName: "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r") pod "redhat-marketplace-8s8pc" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811786 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811849 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2nz92 for pod openshift-console/console-644bb77b49-5x5xk: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811876 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92 podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811867373 +0000 UTC m=+150.404099353 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2nz92" (UniqueName: "kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811877 3556 secret.go:194] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811909 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811903444 +0000 UTC m=+150.404135434 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811944 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811982 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812043 3556 configmap.go:199] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812048 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.811990456 +0000 UTC m=+150.404222646 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812080 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:14:48.812067218 +0000 UTC m=+150.404299238 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812106 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812092979 +0000 UTC m=+150.404324999 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812117 3556 secret.go:194] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812132 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812119949 +0000 UTC m=+150.404351969 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812157 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81214385 +0000 UTC m=+150.404375870 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812166 3556 projected.go:294] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812178 3556 projected.go:294] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812186 3556 projected.go:200] Error preparing data for projected volume kube-api-access-7ggjm for pod openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812223 3556 configmap.go:199] 
Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812260 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812285 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812277573 +0000 UTC m=+150.404509563 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812323 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812336 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812344 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ncrf5 for pod openshift-marketplace/certified-operators-7287f: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.812327 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812403 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812405 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812357275 +0000 UTC m=+150.404589455 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7ggjm" (UniqueName: "kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812486 3556 configmap.go:199] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812485 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812469607 +0000 UTC m=+150.404701637 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812509 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812574 3556 secret.go:194] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.811955 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812605 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812664 3556 configmap.go:199] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812738 3556 secret.go:194] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812747 3556 secret.go:194] Couldn't get secret openshift-console-operator/webhook-serving-cert: object "openshift-console-operator"/"webhook-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812769 3556 projected.go:294] Couldn't get configMap 
openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812789 3556 secret.go:194] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812511 3556 secret.go:194] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812792 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812531 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5 podName:887d596e-c519-4bfa-af90-3edd9e1b2f0f nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812512948 +0000 UTC m=+150.404744968 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncrf5" (UniqueName: "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5") pod "certified-operators-7287f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812452 3556 secret.go:194] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812870 3556 configmap.go:199] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.812919 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812086 3556 configmap.go:199] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812947 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812916977 +0000 UTC m=+150.405149247 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812532 3556 secret.go:194] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812801 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812990 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert podName:c1620f19-8aa3-45cf-931b-7ae0e5cd14cf nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.812981658 +0000 UTC m=+150.405213648 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert") pod "kube-controller-manager-operator-6f6cb54958-rbddb" (UID: "c1620f19-8aa3-45cf-931b-7ae0e5cd14cf") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.812997 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.812613 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813049 3556 projected.go:200] Error preparing data for projected volume kube-api-access-fqnmc for pod openshift-console-operator/console-conversion-webhook-595f9969b-l6z49: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813003 3556 projected.go:200] Error preparing data for projected volume kube-api-access-rg2zg for pod openshift-marketplace/marketplace-operator-8b455464d-f9xdt: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813067 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81305934 +0000 UTC m=+150.405291330 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"trusted-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813085 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813089 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813079831 +0000 UTC m=+150.405311821 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"trusted-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813053 3556 secret.go:194] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813118 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813109141 +0000 UTC m=+150.405341131 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813136 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813128302 +0000 UTC m=+150.405360292 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813150 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813144092 +0000 UTC m=+150.405376082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813170 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813159742 +0000 UTC m=+150.405391732 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813189 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813179693 +0000 UTC m=+150.405411683 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813203 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813197403 +0000 UTC m=+150.405429393 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"trusted-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813214 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813208664 +0000 UTC m=+150.405440654 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813225 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs podName:d5025cb4-ddb0-4107-88c1-bcbcdb779ac0 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813219554 +0000 UTC m=+150.405451544 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs") pod "multus-admission-controller-6c7c885997-4hbbc" (UID: "d5025cb4-ddb0-4107-88c1-bcbcdb779ac0") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813238 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813231644 +0000 UTC m=+150.405463634 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-serving-cert" (UniqueName: "kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : object "openshift-console-operator"/"webhook-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813256 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813246374 +0000 UTC m=+150.405478364 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"encryption-config-1" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813268 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813262185 +0000 UTC m=+150.405494175 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813292 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813348 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813372 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813406 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813379217 +0000 UTC m=+150.405611247 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rg2zg" (UniqueName: "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813419 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813443 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc podName:59748b9b-c309-4712-aa85-bb38d71c4915 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813428889 +0000 UTC m=+150.405660919 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fqnmc" (UniqueName: "kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc") pod "console-conversion-webhook-595f9969b-l6z49" (UID: "59748b9b-c309-4712-aa85-bb38d71c4915") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813457 3556 projected.go:269] Couldn't get secret openshift-image-registry/image-registry-tls: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813466 3556 projected.go:200] Error preparing data for projected volume registry-tls for pod openshift-image-registry/image-registry-75779c45fd-v2j2v: object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813477 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81346094 +0000 UTC m=+150.405692970 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813446 3556 projected.go:294] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813501 3556 projected.go:200] Error preparing data for projected volume kube-api-access-l8bxr for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813505 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813492031 +0000 UTC m=+150.405724061 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813527 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr podName:0f394926-bdb9-425c-b36e-264d7fd34550 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813521541 +0000 UTC m=+150.405753531 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l8bxr" (UniqueName: "kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr") pod "openshift-controller-manager-operator-7978d7d7f6-2nt8z" (UID: "0f394926-bdb9-425c-b36e-264d7fd34550") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813429 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813553 3556 configmap.go:199] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813565 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813585 3556 projected.go:200] Error preparing data for projected volume kube-api-access-55f7t for pod openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813560 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813611 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813625 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813632 3556 projected.go:200] Error preparing data for projected volume kube-api-access-qcxcp for pod openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813569 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls podName:f9a7bc46-2f44-4aff-9cb5-97c97a4a8319 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813564102 +0000 UTC m=+150.405796092 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "registry-tls" (UniqueName: "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : object "openshift-image-registry"/"image-registry-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813658 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp podName:d0f40333-c860-4c04-8058-a0bf572dcf12 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813648954 +0000 UTC m=+150.405880944 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qcxcp" (UniqueName: "kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp") pod "network-check-source-5c5478f8c-vqvt7" (UID: "d0f40333-c860-4c04-8058-a0bf572dcf12") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813676 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813667195 +0000 UTC m=+150.405899185 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-cabundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813690 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813683135 +0000 UTC m=+150.405915125 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55f7t" (UniqueName: "kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813713 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813740 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813767 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813798 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813804 3556 secret.go:194] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813826 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.813857 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813868 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.813848079 +0000 UTC m=+150.406080309 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.813872 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814114 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814129 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814136 3556 projected.go:200] Error preparing data for projected volume kube-api-access-wrd8h for pod openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814137 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814120485 +0000 UTC m=+150.406352515 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814141 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814164 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814157836 +0000 UTC m=+150.406389826 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wrd8h" (UniqueName: "kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814210 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814221 3556 projected.go:294] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814221 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814234 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814259 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814265 3556 projected.go:294] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814274 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kgvs for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814283 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814317 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814331 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814333 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814331 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs podName:9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814314889 +0000 UTC m=+150.406547099 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kgvs" (UniqueName: "kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs") pod "kube-storage-version-migrator-operator-686c6c748c-qbnnr" (UID: "9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814228 3556 projected.go:200] Error preparing data for projected volume kube-api-access-dt5cx for pod openshift-ingress-canary/ingress-canary-2vhcn: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814391 3556 secret.go:194] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814404 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx podName:0b5d722a-1123-4935-9740-52a08d018bc9 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814390701 +0000 UTC m=+150.406622951 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-dt5cx" (UniqueName: "kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx") pod "ingress-canary-2vhcn" (UID: "0b5d722a-1123-4935-9740-52a08d018bc9") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814465 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814491 3556 secret.go:194] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814501 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814277 3556 projected.go:294] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814467 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert podName:ed024e5d-8fc2-4c22-803d-73f3c9795f19 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814459722 +0000 UTC m=+150.406691712 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert") pod "kube-apiserver-operator-78d54458c4-sc8h7" (UID: "ed024e5d-8fc2-4c22-803d-73f3c9795f19") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814532 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8dcvj for pod openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814556 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814540024 +0000 UTC m=+150.406772064 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814341 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hjlhw for pod openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814581 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert podName:8a5ae51d-d173-4531-8975-f164c975ce1f nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814568325 +0000 UTC m=+150.406800355 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert") pod "catalog-operator-857456c46-7f5wf" (UID: "8a5ae51d-d173-4531-8975-f164c975ce1f") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814411 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814611 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814596845 +0000 UTC m=+150.406828885 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"image-import-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814647 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814634446 +0000 UTC m=+150.406866476 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8dcvj" (UniqueName: "kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.814676 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.814659197 +0000 UTC m=+150.406891227 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hjlhw" (UniqueName: "kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814719 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814774 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814829 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814885 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814934 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.814987 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815073 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815133 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815212 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815263 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815285 3556 configmap.go:199] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815306 3556 secret.go:194] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object 
"openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815345 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815353 3556 configmap.go:199] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815318 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config podName:ebf09b15-4bb1-44bf-9d54-e76fad5cf76e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.815310212 +0000 UTC m=+150.407542202 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config") pod "authentication-operator-7cc7ff75d5-g9qv8" (UID: "ebf09b15-4bb1-44bf-9d54-e76fad5cf76e") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815405 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert podName:530553aa-0a1d-423e-8a22-f5eb4bdbb883 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:14:48.815397764 +0000 UTC m=+150.407629754 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert") pod "openshift-config-operator-77658b5b66-dq5sc" (UID: "530553aa-0a1d-423e-8a22-f5eb4bdbb883") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815429 3556 secret.go:194] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815441 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.815435385 +0000 UTC m=+150.407667375 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : object "openshift-console-operator"/"console-operator-config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815451 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815460 3556 configmap.go:199] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815479 3556 projected.go:294] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815431 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815496 3556 projected.go:200] Error preparing data for projected volume kube-api-access-hqmhq for pod openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815266 3556 configmap.go:199] 
Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815460 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.815450395 +0000 UTC m=+150.407682385 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815561 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815570 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.815554227 +0000 UTC m=+150.407786257 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815599 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.815584908 +0000 UTC m=+150.407816938 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815625 3556 secret.go:194] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815647 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815664 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:13:44 crc 
kubenswrapper[3556]: E1128 00:13:44.815672 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls podName:120b38dc-8236-4fa6-a452-642b8ad738ee nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81565648 +0000 UTC m=+150.407888710 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls") pod "machine-config-operator-76788bff89-wkjgm" (UID: "120b38dc-8236-4fa6-a452-642b8ad738ee") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815696 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81568871 +0000 UTC m=+150.407920700 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815701 3556 projected.go:294] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815725 3556 projected.go:294] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815744 3556 projected.go:200] Error preparing data for projected volume kube-api-access-8hpxx for pod openshift-apiserver/apiserver-7fc54b8dd7-d2bhp: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815794 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815806 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.815841 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815855 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.815838734 +0000 UTC m=+150.408070764 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815885 3556 secret.go:194] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815906 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.815872094 +0000 UTC m=+150.408104334 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8hpxx" (UniqueName: "kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.815909 3556 secret.go:194] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816000 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816082 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816047628 +0000 UTC m=+150.408279788 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : object "openshift-ingress-operator"/"metrics-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816124 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816130 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81610734 +0000 UTC m=+150.408339610 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816169 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq podName:cf1a8966-f594-490a-9fbb-eec5bafd13d3 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816150011 +0000 UTC m=+150.408382031 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqmhq" (UniqueName: "kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq") pod "migrator-f7c6d88df-q2fnv" (UID: "cf1a8966-f594-490a-9fbb-eec5bafd13d3") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816216 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca podName:3482be94-0cdb-4e2a-889b-e5fac59fdbf5 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816201162 +0000 UTC m=+150.408433192 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca") pod "marketplace-operator-8b455464d-f9xdt" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816246 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816232723 +0000 UTC m=+150.408464743 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816293 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816340 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816325965 +0000 UTC m=+150.408557995 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"audit-1" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816359 3556 configmap.go:199] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816422 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume podName:13045510-8717-4a71-ade4-be95a76440a7 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816405957 +0000 UTC m=+150.408638177 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume") pod "dns-default-gbw49" (UID: "13045510-8717-4a71-ade4-be95a76440a7") : object "openshift-dns"/"dns-default" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816524 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816584 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816636 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816687 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816749 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816750 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816781 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816773555 +0000 UTC m=+150.409005545 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816784 3556 projected.go:294] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816781 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-idp-0-file-data: object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816806 3556 projected.go:200] Error preparing data for projected volume kube-api-access-pkhl4 for pod openshift-controller-manager/controller-manager-778975cc4f-x5vcf: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816819 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816853 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.816837747 +0000 UTC m=+150.409069947 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-idp-0-file-data" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-idp-0-file-data" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.816918 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816919 3556 configmap.go:199] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.816991 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81697334 +0000 UTC m=+150.409205510 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817049 3556 secret.go:194] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817073 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817125 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817104783 +0000 UTC m=+150.409336803 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817131 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817199 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817198 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817183094 +0000 UTC m=+150.409415304 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817279 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817282 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817319 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4 podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817299487 +0000 UTC m=+150.409531737 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pkhl4" (UniqueName: "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817360 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817338068 +0000 UTC m=+150.409570318 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817362 3556 secret.go:194] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817425 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817435 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81741555 +0000 UTC m=+150.409647580 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817495 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817549 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817561 3556 secret.go:194] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817601 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817622 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817604484 +0000 UTC m=+150.409836714 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"etcd-client" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817623 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817693 3556 configmap.go:199] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817698 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817682656 +0000 UTC m=+150.409914676 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817773 3556 secret.go:194] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817815 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817818 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817805849 +0000 UTC m=+150.410037879 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : object "openshift-service-ca-operator"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817889 3556 configmap.go:199] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817893 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817893 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817937 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817913572 +0000 UTC m=+150.410145592 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : object "openshift-route-controller-manager"/"client-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.817985 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817992 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.817989 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.817969263 +0000 UTC m=+150.410201283 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"audit" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818084 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818097 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818076285 +0000 UTC m=+150.410308315 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818146 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818187 3556 secret.go:194] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818233 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818217218 +0000 UTC m=+150.410449238 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818233 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818286 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert podName:c085412c-b875-46c9-ae3e-e6b0d8067091 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818267039 +0000 UTC m=+150.410499069 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert") pod "olm-operator-6d8474f75f-x54mh" (UID: "c085412c-b875-46c9-ae3e-e6b0d8067091") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818304 3556 secret.go:194] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818322 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81830646 +0000 UTC m=+150.410538480 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"client-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818369 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818408 3556 configmap.go:199] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818426 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818460 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818433053 +0000 UTC m=+150.410665073 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"console-oauth-config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818521 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818528 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818508505 +0000 UTC m=+150.410740535 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"trusted-ca-bundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818530 3556 configmap.go:199] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818582 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818612 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818592136 +0000 UTC m=+150.410824386 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"oauth-serving-cert" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818670 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818679 3556 secret.go:194] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818721 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818735 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection podName:01feb2e0-a0f4-4573-8335-34e364e0ef40 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818719719 +0000 UTC m=+150.410951749 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection") pod "oauth-openshift-74fc7c67cc-xqf8b" (UID: "01feb2e0-a0f4-4573-8335-34e364e0ef40") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818765 3556 configmap.go:199] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818778 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.81876257 +0000 UTC m=+150.410994600 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"audit-1" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818828 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca podName:9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818808581 +0000 UTC m=+150.411040831 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca") pod "console-644bb77b49-5x5xk" (UID: "9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1") : object "openshift-console"/"service-ca" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818831 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818898 3556 secret.go:194] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.818922 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.818954 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.818939744 +0000 UTC m=+150.411171774 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : object "openshift-service-ca"/"signing-key" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819000 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819088 3556 configmap.go:199] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819002 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819096 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.819080118 +0000 UTC m=+150.411312148 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"config" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819177 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819233 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819287 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819335 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819390 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819437 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819488 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819537 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819601 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod 
\"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819647 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.819712 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819851 3556 secret.go:194] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819915 3556 configmap.go:199] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819932 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.819910397 +0000 UTC m=+150.412142427 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819977 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images podName:4f8aa612-9da0-4a2b-911e-6a1764a4e74e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.819954018 +0000 UTC m=+150.412186048 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images") pod "machine-api-operator-788b7c6b6c-ctdmb" (UID: "4f8aa612-9da0-4a2b-911e-6a1764a4e74e") : object "openshift-machine-api"/"machine-api-operator-images" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820071 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/kube-root-ca.crt: object "hostpath-provisioner"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820094 3556 projected.go:294] Couldn't get configMap hostpath-provisioner/openshift-service-ca.crt: object "hostpath-provisioner"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820115 3556 projected.go:200] Error preparing data for projected volume kube-api-access-vvtrv for pod hostpath-provisioner/csi-hostpathplugin-hvm8g: [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820163 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object 
"openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820187 3556 secret.go:194] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820204 3556 projected.go:294] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820226 3556 configmap.go:199] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820167 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv podName:12e733dd-0939-4f1b-9cbb-13897e093787 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820152382 +0000 UTC m=+150.412384412 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvtrv" (UniqueName: "kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv") pod "csi-hostpathplugin-hvm8g" (UID: "12e733dd-0939-4f1b-9cbb-13897e093787") : [object "hostpath-provisioner"/"kube-root-ca.crt" not registered, object "hostpath-provisioner"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820264 3556 secret.go:194] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820275 3556 secret.go:194] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820287 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert podName:bd556935-a077-45df-ba3f-d42c39326ccd nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820272655 +0000 UTC m=+150.412504675 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert") pod "packageserver-8464bcc55b-sjnqz" (UID: "bd556935-a077-45df-ba3f-d42c39326ccd") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820303 3556 configmap.go:199] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.819860 3556 configmap.go:199] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820229 3556 projected.go:200] Error preparing data for projected volume kube-api-access-4w8wh for pod openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820315 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle podName:41e8708a-e40d-4d28-846b-c52eda4d1755 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820300905 +0000 UTC m=+150.412532925 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle") pod "apiserver-7fc54b8dd7-d2bhp" (UID: "41e8708a-e40d-4d28-846b-c52eda4d1755") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820431 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820409598 +0000 UTC m=+150.412641628 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820471 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls podName:b54e8941-2fc4-432a-9e51-39684df9089e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.82045165 +0000 UTC m=+150.412683880 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls") pod "cluster-image-registry-operator-7769bd8d7d-q5cvv" (UID: "b54e8941-2fc4-432a-9e51-39684df9089e") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820525 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs podName:a702c6d2-4dde-4077-ab8c-0f8df804bf7a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820512511 +0000 UTC m=+150.412744531 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs") pod "network-metrics-daemon-qdfr4" (UID: "a702c6d2-4dde-4077-ab8c-0f8df804bf7a") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820560 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820541212 +0000 UTC m=+150.412773472 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820626 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles podName:1a3e81c3-c292-4130-9436-f94062c91efd nodeName:}" failed. 
No retries permitted until 2025-11-28 00:14:48.820603843 +0000 UTC m=+150.412836003 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles") pod "controller-manager-778975cc4f-x5vcf" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd") : object "openshift-controller-manager"/"openshift-global-ca" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820672 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh podName:5bacb25d-97b6-4491-8fb4-99feae1d802a nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820657954 +0000 UTC m=+150.412889974 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4w8wh" (UniqueName: "kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh") pod "apiserver-69c565c9b6-vbdpd" (UID: "5bacb25d-97b6-4491-8fb4-99feae1d802a") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.820735 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.820687505 +0000 UTC m=+150.412919745 (durationBeforeRetry 1m4s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912209 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912295 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912366 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.912435 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912501 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912571 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912683 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912717 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912764 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.912823 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912833 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912887 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.912937 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912939 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.912966 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913078 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.913139 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.913193 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913254 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.913297 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.913348 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913403 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.913449 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913515 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.913566 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913623 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.912681 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913708 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913781 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913860 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.913942 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.914232 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914279 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.914325 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.914513 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914526 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914560 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.914682 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914719 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914756 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914769 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914811 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914848 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914863 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.914882 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914902 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914906 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914947 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.914975 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.915063 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.915158 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.915238 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.915294 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.915474 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.915593 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.915701 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.915828 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.915909 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916043 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916132 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916216 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916320 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916415 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916737 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916790 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.916889 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.920944 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.921199 3556 projected.go:294] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.921248 3556 projected.go:294] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: 
E1128 00:13:44.921274 3556 projected.go:200] Error preparing data for projected volume kube-api-access-5rpl7 for pod openshift-console-operator/console-operator-5dbbc74dc9-cp5cd: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.921274 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.921434 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.921473 3556 projected.go:294] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.921363 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7 podName:e9127708-ccfd-4891-8a3a-f0cacb77e0f4 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.921332541 +0000 UTC m=+150.513564571 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5rpl7" (UniqueName: "kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7") pod "console-operator-5dbbc74dc9-cp5cd" (UID: "e9127708-ccfd-4891-8a3a-f0cacb77e0f4") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.921493 3556 projected.go:200] Error preparing data for projected volume kube-api-access-tl5kg for pod openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.921574 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg podName:7d51f445-054a-4e4f-a67b-a828f5a32511 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.921545076 +0000 UTC m=+150.513777096 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tl5kg" (UniqueName: "kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg") pod "ingress-operator-7d46d5bb6d-rrg6t" (UID: "7d51f445-054a-4e4f-a67b-a828f5a32511") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.921940 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922128 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922175 3556 projected.go:294] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922195 3556 projected.go:200] Error preparing data for projected volume kube-api-access-nf4t2 for pod openshift-dns-operator/dns-operator-75f687757b-nz2xb: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922270 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2 podName:10603adc-d495-423c-9459-4caa405960bb nodeName:}" failed. 
No retries permitted until 2025-11-28 00:14:48.922250441 +0000 UTC m=+150.514482461 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf4t2" (UniqueName: "kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2") pod "dns-operator-75f687757b-nz2xb" (UID: "10603adc-d495-423c-9459-4caa405960bb") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.922330 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.922464 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922511 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.922526 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922541 3556 projected.go:294] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922557 3556 projected.go:200] Error preparing data for projected volume kube-api-access-d9vhj for pod openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922610 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj podName:6d67253e-2acd-4bc1-8185-793587da4f17 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.922594139 +0000 UTC m=+150.514826239 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d9vhj" (UniqueName: "kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj") pod "service-ca-operator-546b4f8984-pwccz" (UID: "6d67253e-2acd-4bc1-8185-793587da4f17") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922677 3556 projected.go:294] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922697 3556 projected.go:200] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922714 3556 projected.go:294] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922749 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access podName:71af81a9-7d43-49b2-9287-c375900aa905 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.922734023 +0000 UTC m=+150.514966053 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access") pod "openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" (UID: "71af81a9-7d43-49b2-9287-c375900aa905") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922752 3556 projected.go:294] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922778 3556 projected.go:200] Error preparing data for projected volume kube-api-access-2zpsk for pod openshift-console/downloads-65476884b9-9wcvx: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.922842 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk podName:6268b7fe-8910-4505-b404-6f1df638105c nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.922823695 +0000 UTC m=+150.515055725 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2zpsk" (UniqueName: "kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk") pod "downloads-65476884b9-9wcvx" (UID: "6268b7fe-8910-4505-b404-6f1df638105c") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.923057 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.923136 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923216 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923242 3556 projected.go:294] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923260 3556 projected.go:200] Error preparing data for projected volume kube-api-access-76gl8 for pod openshift-network-diagnostics/network-check-target-v54bt: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923325 3556 projected.go:294] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923348 3556 projected.go:294] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923361 3556 projected.go:200] Error preparing data for projected volume kube-api-access-js87r for pod openshift-service-ca/service-ca-666f99b6f-kk8kg: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923497 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r podName:e4a7de23-6134-4044-902a-0900dc04a501 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.923476429 +0000 UTC m=+150.515708449 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-js87r" (UniqueName: "kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r") pod "service-ca-666f99b6f-kk8kg" (UID: "e4a7de23-6134-4044-902a-0900dc04a501") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.923551 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8 podName:34a48baf-1bee-4921-8bb2-9b7320e76f79 nodeName:}" failed. 
No retries permitted until 2025-11-28 00:14:48.92353344 +0000 UTC m=+150.515765470 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76gl8" (UniqueName: "kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8") pod "network-check-target-v54bt" (UID: "34a48baf-1bee-4921-8bb2-9b7320e76f79") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.924296 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924421 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924456 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924475 3556 projected.go:200] Error preparing data for projected volume kube-api-access-ptdrb for pod openshift-marketplace/redhat-operators-f4jkp: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924530 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb podName:4092a9f8-5acc-4932-9e90-ef962eeb301a nodeName:}" failed. 
No retries permitted until 2025-11-28 00:14:48.924512753 +0000 UTC m=+150.516744783 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptdrb" (UniqueName: "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb") pod "redhat-operators-f4jkp" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.924551 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924680 3556 projected.go:294] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924699 3556 projected.go:294] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924709 3556 projected.go:200] Error preparing data for projected volume kube-api-access-bm986 for pod openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Nov 28 00:13:44 crc kubenswrapper[3556]: I1128 00:13:44.924716 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: 
\"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924762 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986 podName:45a8038e-e7f2-4d93-a6f5-7753aa54e63f nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.924736838 +0000 UTC m=+150.516968828 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bm986" (UniqueName: "kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986") pod "control-plane-machine-set-operator-649bd778b4-tt5tw" (UID: "45a8038e-e7f2-4d93-a6f5-7753aa54e63f") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924830 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924855 3556 projected.go:294] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924870 3556 projected.go:200] Error preparing data for projected volume kube-api-access-v7vkr for pod openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:44 crc kubenswrapper[3556]: E1128 00:13:44.924940 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr podName:21d29937-debd-4407-b2b1-d1053cb0f342 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:48.924924202 +0000 UTC m=+150.517156222 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7vkr" (UniqueName: "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr") pod "route-controller-manager-776b8b7477-sfpvs" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.026655 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.026991 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.027085 3556 projected.go:294] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.027107 3556 projected.go:200] Error preparing data for projected volume kube-api-access-lx2h9 for pod openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.027214 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9 podName:43ae1c37-047b-4ee2-9fee-41e337dd4ac8 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:49.027182305 +0000 UTC m=+150.619414335 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lx2h9" (UniqueName: "kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9") pod "openshift-apiserver-operator-7c88c4c865-kn67m" (UID: "43ae1c37-047b-4ee2-9fee-41e337dd4ac8") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.027534 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.027765 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.028194 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.028248 3556 projected.go:200] Error preparing data for projected volume kube-api-access-n6sqt for pod openshift-marketplace/community-operators-8jhz6: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.028418 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt podName:3f4dca86-e6ee-4ec9-8324-86aff960225e nodeName:}" failed. No retries permitted until 2025-11-28 00:14:49.028373902 +0000 UTC m=+150.620605932 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6sqt" (UniqueName: "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt") pod "community-operators-8jhz6" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.132322 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.132402 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.132457 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.132581 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.132786 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.132875 3556 projected.go:294] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.132887 3556 projected.go:294] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.132964 3556 projected.go:294] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.132993 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9p8gt for pod openshift-marketplace/community-operators-sdddl: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133081 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.132908 3556 projected.go:200] Error preparing data for projected volume kube-api-access-9724w for pod openshift-etcd-operator/etcd-operator-768d5b5d86-722mg: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133123 3556 projected.go:294] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133142 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt podName:fc9c9ba0-fcbb-4e78-8cf5-a059ec435760 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:49.133107912 +0000 UTC m=+150.725339932 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9p8gt" (UniqueName: "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt") pod "community-operators-sdddl" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133146 3556 projected.go:200] Error preparing data for projected volume kube-api-access-x5d97 for pod openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133081 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133239 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97 podName:63eb7413-02c3-4d6e-bb48-e5ffe5ce15be nodeName:}" failed. No retries permitted until 2025-11-28 00:14:49.133211594 +0000 UTC m=+150.725443624 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x5d97" (UniqueName: "kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97") pod "package-server-manager-84d578d794-jw7r2" (UID: "63eb7413-02c3-4d6e-bb48-e5ffe5ce15be") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133243 3556 projected.go:294] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133267 3556 projected.go:200] Error preparing data for projected volume kube-api-access-6kp86 for pod openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133288 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w podName:0b5c38ff-1fa8-4219-994d-15776acd4a4d nodeName:}" failed. No retries permitted until 2025-11-28 00:14:49.133270885 +0000 UTC m=+150.725502915 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9724w" (UniqueName: "kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w") pod "etcd-operator-768d5b5d86-722mg" (UID: "0b5c38ff-1fa8-4219-994d-15776acd4a4d") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.133415 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86 podName:f728c15e-d8de-4a9a-a3ea-fdcead95cb91 nodeName:}" failed. No retries permitted until 2025-11-28 00:14:49.133398978 +0000 UTC m=+150.725631188 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6kp86" (UniqueName: "kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86") pod "cluster-samples-operator-bc474d5d6-wshwg" (UID: "f728c15e-d8de-4a9a-a3ea-fdcead95cb91") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.264562 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:45 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:45 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:45 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.264717 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912399 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912434 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912937 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912478 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912525 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912558 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912592 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912623 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912658 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912689 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912716 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912761 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912793 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:45 crc kubenswrapper[3556]: I1128 00:13:45.912835 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.915048 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.915584 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.915765 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.915914 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.915963 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916078 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916179 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916285 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916371 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916426 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916517 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916610 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916680 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:45 crc kubenswrapper[3556]: E1128 00:13:45.916774 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.265062 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:46 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:46 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:46 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.265149 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912211 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912249 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912300 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912375 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912321 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.912468 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912476 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912525 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912596 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912640 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912654 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912613 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912640 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912684 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912656 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912775 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912817 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912900 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912943 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912958 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912988 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913051 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.912971 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913094 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912844 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913162 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913213 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.912869 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913250 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913182 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913376 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913592 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.913629 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.913698 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.913841 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.913997 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.914227 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:46 crc kubenswrapper[3556]: I1128 00:13:46.914341 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.914459 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.914601 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.914830 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.914954 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.915103 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.915272 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.915394 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.915511 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.915656 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.915805 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.915927 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.916074 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.916203 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.916630 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.916749 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.916792 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917277 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917310 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917317 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917357 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917369 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917440 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917567 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.917829 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.918151 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.918602 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:46 crc kubenswrapper[3556]: E1128 00:13:46.919274 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.302336 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:47 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:47 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:47 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.303137 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.912546 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.912581 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.912653 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.912611 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.912795 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.912827 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.912872 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.913049 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.913096 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.913233 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.913868 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.913883 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.914049 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.914134 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.914334 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.915050 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.915216 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.915494 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.915565 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.915666 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.915783 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.915949 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.916099 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.916213 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:47 crc kubenswrapper[3556]: I1128 00:13:47.916324 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.916390 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.916612 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:47 crc kubenswrapper[3556]: E1128 00:13:47.916748 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.264563 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:48 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:48 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:48 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.265343 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.912685 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916586 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.916632 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916688 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916715 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916664 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916795 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.916803 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916819 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916846 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916855 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916814 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916897 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.916909 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917075 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.917080 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917124 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917133 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917161 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917262 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.917273 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917284 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917333 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.917509 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917571 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917575 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917631 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917727 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.917732 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.917830 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917862 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.917923 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.917957 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.918187 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.918459 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.918474 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.918566 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.918643 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.918813 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.918929 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.918991 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.919097 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.919141 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.919195 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.919405 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.919461 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.919444 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.919548 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.919660 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.919849 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.919935 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:48 crc kubenswrapper[3556]: I1128 00:13:48.919943 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920049 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920139 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920340 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920376 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920458 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920612 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920735 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920838 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.920926 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.921001 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.921138 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.921199 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:48 crc kubenswrapper[3556]: E1128 00:13:48.921273 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.265384 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:49 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:49 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:49 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.265517 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912230 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912251 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912308 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912385 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912457 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912495 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.912555 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912609 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912634 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912711 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.912762 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.912868 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.913080 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.913146 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.913223 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.913240 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.913279 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:49 crc kubenswrapper[3556]: I1128 00:13:49.913283 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.913460 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.913659 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.913924 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.914088 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.914197 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.914354 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.914690 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.914921 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.915269 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:49 crc kubenswrapper[3556]: E1128 00:13:49.915395 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.265355 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:50 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:50 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:50 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.265475 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912339 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912418 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912510 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912578 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912519 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912652 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912708 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.912736 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912736 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912826 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912885 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.912954 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912835 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.913050 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912851 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.913108 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912837 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.912855 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.913279 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.913295 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.913345 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.913508 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.913646 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.913733 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.913846 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.913901 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.914070 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.914084 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.914179 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.914241 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.914300 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.914483 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.914580 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.914727 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.914839 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.914954 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.915079 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.915192 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.915251 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.915261 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.915338 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.915405 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.915542 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.915623 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.915805 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.915967 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.915982 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.915989 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.916175 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.916229 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.916296 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.916354 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:50 crc kubenswrapper[3556]: I1128 00:13:50.916507 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.916600 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.916722 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.916808 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.916919 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917033 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917137 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917342 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917484 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917532 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917685 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917752 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.917837 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:50 crc kubenswrapper[3556]: E1128 00:13:50.918315 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.264921 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:51 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:51 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:51 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.265088 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.912570 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.912738 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.912575 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.912644 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.913117 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913173 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913196 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.913287 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913286 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913375 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913134 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913375 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913478 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913135 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913473 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:51 crc kubenswrapper[3556]: I1128 00:13:51.913454 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.913853 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.914109 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.914272 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.914444 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.914640 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.914807 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.914998 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.915089 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.915226 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.915276 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.915369 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:51 crc kubenswrapper[3556]: E1128 00:13:51.915477 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.266131 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:52 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:52 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:52 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.266273 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912678 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912799 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912864 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912917 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912952 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913004 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913068 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912873 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913109 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912808 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913193 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913211 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913243 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913282 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913288 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913246 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913305 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913336 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913414 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913432 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913442 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913194 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913504 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913223 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913337 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912961 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913143 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912827 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913252 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.912877 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.913754 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.913339 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.913923 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.914144 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:52 crc kubenswrapper[3556]: I1128 00:13:52.914437 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.914490 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.914554 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.914677 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.914861 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.915081 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.915254 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.915393 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.915553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.915796 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.915938 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.916194 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.916280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.916398 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.916532 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.916707 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.916907 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.916964 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917084 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917187 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917477 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917529 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917574 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917615 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917662 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917787 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.917960 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.918163 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.918309 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.918488 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:52 crc kubenswrapper[3556]: E1128 00:13:52.918675 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.265216 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:53 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:53 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:53 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.265324 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.912092 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.912149 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.912120 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.912363 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.912946 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.913290 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913468 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913522 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913623 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913661 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913663 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913740 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.913863 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913890 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913867 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:53 crc kubenswrapper[3556]: I1128 00:13:53.913991 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.914199 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.914420 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.914558 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.914679 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.915127 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.915216 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.915359 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.915492 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.915669 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.915769 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.915860 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:53 crc kubenswrapper[3556]: E1128 00:13:53.916053 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.265218 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:54 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:54 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:54 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.265288 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912420 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912560 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912645 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.912744 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912761 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912777 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912816 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912923 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.912928 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913113 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913150 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.913117 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913113 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913224 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913178 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913269 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913342 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.913450 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913488 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913562 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913587 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913575 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913628 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.913712 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913752 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913763 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913778 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913826 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.913830 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.914102 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.914250 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.914341 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.914398 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.914541 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.914614 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.914738 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.914935 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.914985 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.915130 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.915353 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.915515 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.915569 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.915663 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.915718 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.915783 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.915950 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.916167 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.916342 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.916487 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.916688 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.916854 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.917045 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:54 crc kubenswrapper[3556]: I1128 00:13:54.917121 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.917502 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.917626 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.917742 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.918038 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.918153 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.918408 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.918590 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.918778 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.918943 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.919089 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.919212 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.919387 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:54 crc kubenswrapper[3556]: E1128 00:13:54.919591 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.264995 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:55 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:55 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:55 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.265172 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912633 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912702 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912653 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912768 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912796 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912868 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912783 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912668 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912929 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912935 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912862 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912725 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912678 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:55 crc kubenswrapper[3556]: I1128 00:13:55.912824 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.913214 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.913603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.914524 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.914674 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.914754 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.914836 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.914973 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.915195 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.915351 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.915515 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.915609 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.915709 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.915825 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:55 crc kubenswrapper[3556]: E1128 00:13:55.915967 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.264666 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:56 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:56 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:56 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.264749 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913198 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913279 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913316 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913331 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913232 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913229 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913420 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913458 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913506 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913556 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913576 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913604 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913611 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913563 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913523 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913458 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913680 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913706 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913466 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913742 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913300 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913795 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913469 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913519 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913518 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913584 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913528 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.913981 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.914241 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.914436 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.914588 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.914616 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.914718 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.914917 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.915132 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.915333 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.915498 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.915627 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.915757 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.915928 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.916070 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.916335 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.916509 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.916647 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.916860 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.916901 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.917161 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.917361 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.917397 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.917576 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.917769 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.917852 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.918043 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.918163 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.918207 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.918341 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.918510 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.918718 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:13:56 crc kubenswrapper[3556]: I1128 00:13:56.918951 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.919163 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.919505 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.919776 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.920051 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:13:56 crc kubenswrapper[3556]: E1128 00:13:56.920160 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.264417 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:57 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:57 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:57 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.264522 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912150 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912247 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912257 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912405 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912515 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912804 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912814 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.912942 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.913127 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.913279 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.913279 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.913426 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.913133 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.913295 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.913303 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:13:57 crc kubenswrapper[3556]: I1128 00:13:57.913373 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.913568 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.914198 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.914538 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.914575 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.914694 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.914842 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.914975 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.915093 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.915243 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.915387 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.915413 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:13:57 crc kubenswrapper[3556]: E1128 00:13:57.915502 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.265001 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:13:58 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:13:58 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:13:58 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.265137 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.912857 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.912976 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913077 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913095 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913130 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913164 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913188 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913224 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913248 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913269 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.912985 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913325 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913484 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913496 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913407 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913331 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.913655 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.917267 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.917320 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.917465 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.917501 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.917540 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.917672 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.917750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.917839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.917980 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.918073 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.918098 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.918140 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.918236 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.919199 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.919231 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.919251 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.919360 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.919504 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.919574 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.919665 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.919838 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.919934 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.920086 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.920123 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.920275 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.920293 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.920403 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.920407 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.920577 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.920693 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.920824 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.921056 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.921089 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.921240 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:13:58 crc kubenswrapper[3556]: I1128 00:13:58.921322 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.921405 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.921469 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.921576 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.921725 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.921990 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.922216 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.922378 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.922514 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.922640 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.922749 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.922845 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.922983 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:13:58 crc kubenswrapper[3556]: E1128 00:13:58.923123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.264858 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:13:59 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:13:59 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:13:59 crc kubenswrapper[3556]: healthz check failed Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.264972 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.912890 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.912890 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.912916 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.912923 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.912958 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.912970 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.912992 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.913040 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.913067 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.913090 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.913131 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.913084 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.913203 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:13:59 crc kubenswrapper[3556]: I1128 00:13:59.913203 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.913788 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.914280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.914428 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.914513 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.914631 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.914737 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.914910 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.915064 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.915264 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.915399 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.915586 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.915616 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.915705 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:13:59 crc kubenswrapper[3556]: E1128 00:13:59.915859 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.265629 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:00 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:00 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:00 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.265772 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912417 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912433 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912466 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912552 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912567 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912433 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912611 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912635 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912660 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912708 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912670 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912755 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912766 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912795 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.912935 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912968 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912989 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913089 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913111 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.913130 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913223 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.912954 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913000 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913430 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913465 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913305 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.913466 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.913611 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.913719 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.913849 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.913889 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.913984 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.914074 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.914143 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.914560 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.914581 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.914624 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.914713 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.914949 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915061 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915141 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915212 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.915264 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915384 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915481 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915629 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.915653 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915708 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915767 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.915797 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.915921 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.915984 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.916053 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.916124 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.916222 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:00 crc kubenswrapper[3556]: I1128 00:14:00.916288 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.916361 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.916438 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.916881 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.916911 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.917065 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.917148 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.917183 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:00 crc kubenswrapper[3556]: E1128 00:14:00.917261 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.264473 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:01 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:01 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:01 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.264565 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.912549 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.912655 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.912719 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.912576 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.912790 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.912916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.912924 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.912985 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.913059 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.913105 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.913153 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.913213 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.913345 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.913461 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.913542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.913548 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.913802 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.913947 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:01 crc kubenswrapper[3556]: I1128 00:14:01.914292 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.914488 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.914637 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.914720 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.914781 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.914846 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.914910 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.914961 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.915025 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:01 crc kubenswrapper[3556]: E1128 00:14:01.915078 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.264214 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:02 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:02 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:02 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.264282 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912428 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912470 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912444 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912594 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912650 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912767 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912787 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.912792 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912854 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912878 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.912995 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913071 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913168 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913249 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913307 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.912526 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913370 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913370 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913435 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913440 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913554 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913566 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913597 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913673 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913230 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913727 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output="" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913753 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913808 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913851 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913892 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.913926 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.913996 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.914061 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.914212 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.914397 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.914570 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.914645 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.914813 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.914878 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.914985 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.915158 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915229 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915281 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.915436 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.915514 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915527 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915550 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.915648 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915658 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.915886 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915922 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915933 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915971 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916026 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:14:02 crc kubenswrapper[3556]: I1128 00:14:02.915974 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916098 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916189 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916294 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916381 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916515 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916621 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916695 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916786 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916868 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.916946 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.917092 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:14:02 crc kubenswrapper[3556]: E1128 00:14:02.917883 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.265640 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:03 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:03 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:03 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.265761 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913146 3556 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913308 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913441 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.913477 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913315 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.913709 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913716 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913788 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913885 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.913895 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913991 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.914066 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.913836 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.914163 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.914182 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.914278 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.914278 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.914332 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.914584 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:14:03 crc kubenswrapper[3556]: I1128 00:14:03.914651 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.914888 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.915118 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.915153 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.915301 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.915481 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.916196 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.916352 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:03 crc kubenswrapper[3556]: E1128 00:14:03.916539 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.265002 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:04 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:04 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:04 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.265173 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.912855 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.912949 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.912983 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913076 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913102 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913109 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.912983 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.912880 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913196 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913232 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913253 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913196 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913270 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913276 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.912855 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.913664 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913783 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.913788 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.913946 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.914736 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.914771 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.914828 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.914871 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.914879 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.914921 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.914953 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.915133 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.915158 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.915334 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.915410 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.915566 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.915666 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.915704 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.915879 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.916088 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.916092 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.916189 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.916203 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.916285 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.916322 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.916330 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.916424 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.916468 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.916566 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.916620 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.916758 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.916882 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917071 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917223 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917273 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:04 crc kubenswrapper[3556]: I1128 00:14:04.917344 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917416 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917494 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917634 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917726 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917808 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.917941 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918098 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918178 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918383 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918511 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918591 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918674 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918735 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918785 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:04 crc kubenswrapper[3556]: E1128 00:14:04.918840 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.264326 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:05 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:05 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:05 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.264452 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913051 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913102 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913183 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913246 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913323 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913451 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913526 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913567 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913526 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.913474 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.913464 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.913702 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.914175 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.914201 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.914228 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.914324 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.914353 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.914471 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:05 crc kubenswrapper[3556]: I1128 00:14:05.914534 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.914622 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.914929 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.915099 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.915314 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.915429 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.915573 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.915679 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.915785 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:05 crc kubenswrapper[3556]: E1128 00:14:05.916067 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.263517 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:06 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:06 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:06 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.263636 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913292 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913409 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913484 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913582 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913659 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.913662 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913349 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913732 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913911 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913977 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.913292 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.914181 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.914220 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.914455 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.914464 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.914563 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.914689 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.914890 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.915082 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915147 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915182 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915251 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915324 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.915323 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.915424 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915448 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915492 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.915574 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.915727 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915800 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.915892 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.916058 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.916109 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.916263 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.916344 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.916499 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.916499 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.916605 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.916923 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.916949 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.917167 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.916963 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.917361 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.917403 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.917407 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.917115 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.917592 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.917668 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.917812 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.917985 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:14:06 crc kubenswrapper[3556]: I1128 00:14:06.918036 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.918225 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.918366 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.918504 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.918826 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.919218 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.919295 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.919561 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.920189 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.921229 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.921486 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.921618 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.921804 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.922080 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:14:06 crc kubenswrapper[3556]: E1128 00:14:06.922231 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.265488 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:07 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:07 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:07 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.265606 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913006 3556 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913086 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913161 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913229 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913186 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.913427 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913507 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913516 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.913832 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.913888 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.914236 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.914257 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.914455 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.914899 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.915062 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.915758 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.915387 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.915605 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.915971 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.916109 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.916218 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.916310 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.916475 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.916621 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.916706 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.917182 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.917256 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:07 crc kubenswrapper[3556]: E1128 00:14:07.917328 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:07 crc kubenswrapper[3556]: I1128 00:14:07.986884 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" probeResult="failure" output=""
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.264621 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:08 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:08 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:08 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.264764 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.912316 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.912480 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.912531 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.912549 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.912629 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.916925 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.916936 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.916994 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917045 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917116 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917171 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917179 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.917252 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917305 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917350 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917376 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917396 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917422 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917475 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.917492 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.917517 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.917679 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.917875 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918066 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918087 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.918345 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.918407 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918453 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918476 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918516 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918555 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918584 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918593 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918630 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918678 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.918712 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918809 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918818 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.918935 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.918945 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.919210 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.919349 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.919482 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.919610 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.919676 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.919817 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.919915 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:08 crc kubenswrapper[3556]: I1128 00:14:08.919970 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920073 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920181 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920254 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920365 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920549 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920617 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920707 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920816 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.920927 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.921288 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.921474 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.921567 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.921820 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.922166 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.922296 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.922409 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.922457 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:08 crc kubenswrapper[3556]: E1128 00:14:08.922562 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.265265 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:09 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:09 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:09 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.265400 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.912886 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.912893 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913060 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913065 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913146 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913064 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913171 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913176 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913207 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.912974 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913085 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.912918 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.913387 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.913528 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.913705 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.913840 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.913998 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:09 crc kubenswrapper[3556]: I1128 00:14:09.914134 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.914209 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.914418 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.914723 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.914779 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.914991 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.915246 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.915370 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.915535 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.915625 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:09 crc kubenswrapper[3556]: E1128 00:14:09.915724 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.264684 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:10 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:10 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:10 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.264797 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912175 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912247 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912306 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.912708 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912727 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912738 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912799 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912728 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912342 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912932 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912955 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912968 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912371 3556 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912371 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912389 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912408 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.913112 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912426 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912433 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912483 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.913195 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912461 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912501 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912494 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912517 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912540 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912547 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912549 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912566 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912306 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912590 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912617 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912870 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912956 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.913071 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:14:10 crc kubenswrapper[3556]: I1128 00:14:10.912490 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.913556 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.913632 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.913738 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.913902 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.914188 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.914486 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.914708 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.914899 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.914975 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.915122 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.915355 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.915485 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.915567 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.915697 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916171 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916322 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916375 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916412 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916518 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916671 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916741 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.916920 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917046 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917219 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917294 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917430 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917509 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917652 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917713 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:14:10 crc kubenswrapper[3556]: E1128 00:14:10.917816 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.265236 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:11 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:11 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:11 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.265386 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912455 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912528 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912569 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912659 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912662 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912453 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912500 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912781 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912788 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912686 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912942 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.912911 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.912961 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.913140 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.913150 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:11 crc kubenswrapper[3556]: I1128 00:14:11.913271 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.913565 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.913728 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.913907 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.914215 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.914301 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.914640 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.914743 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.914912 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.915181 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.915322 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.915445 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:11 crc kubenswrapper[3556]: E1128 00:14:11.915603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.265831 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:12 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:12 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:12 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.265988 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912195 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912227 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912329 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.912489 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912501 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912557 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912422 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912595 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912588 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912731 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912732 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912739 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.912743 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912762 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912875 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912892 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.912958 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.912966 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.913116 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.913280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.913415 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.913444 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.913487 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.913632 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.913661 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.913743 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.913750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.913851 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.913874 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.913950 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914066 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.914234 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914326 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.914375 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914371 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914452 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914482 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914491 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914508 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914533 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914582 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:12 crc kubenswrapper[3556]: I1128 00:14:12.914618 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.914839 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.915123 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.915222 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.915424 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.915606 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.915756 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.915999 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.916145 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.916527 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.916672 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.916874 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.917245 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.917372 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.917547 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.917713 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.917874 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.918005 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.918185 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.918229 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.918307 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.918447 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.918547 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:12 crc kubenswrapper[3556]: E1128 00:14:12.918661 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.264802 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:13 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:13 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:13 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.264918 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.582635 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.583479 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/6.log"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.583547 3556 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="90dd7dbcf1699d6c2dd098e8bad21d98d61147b5b5812093844f54c0f01e65f5" exitCode=1
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.583586 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"90dd7dbcf1699d6c2dd098e8bad21d98d61147b5b5812093844f54c0f01e65f5"}
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.583675 3556 scope.go:117] "RemoveContainer" containerID="6e48d427ed2b5ca2c86082810b5594169678d94b73922fdf6c408e4bbe775561"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.584511 3556 scope.go:117] "RemoveContainer" containerID="90dd7dbcf1699d6c2dd098e8bad21d98d61147b5b5812093844f54c0f01e65f5"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.585525 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.912591 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.912665 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.912674 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.912835 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.912591 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.913165 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.912622 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913296 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913229 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913352 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.913523 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913576 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913680 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913699 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913719 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:13 crc kubenswrapper[3556]: I1128 00:14:13.913818 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.913914 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.914156 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.914457 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.914632 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.914761 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.914891 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.915145 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.915306 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.915541 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.915638 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.915740 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:13 crc kubenswrapper[3556]: E1128 00:14:13.916162 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.264630 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:14 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:14 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:14 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.264774 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.590171 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.912841 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.912872 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.912936 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.912985 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913064 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913124 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913136 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913160 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913187 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913171 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913224 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913068 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913242 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913269 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913078 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913169 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913310 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913284 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913311 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913349 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913390 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913423 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913226 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913426 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913339 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913619 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.913636 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913622 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913720 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.913646 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.913922 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.914154 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.914271 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.914442 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.914598 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.914771 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.914916 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.915213 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.915437 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.915641 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.915651 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.915966 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.916065 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916164 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916195 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916310 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916325 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916546 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916796 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916848 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.916950 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:14:14 crc kubenswrapper[3556]: I1128 00:14:14.917064 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.917149 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.917251 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.917444 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.917591 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.917810 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.917881 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.917988 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.918158 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.918244 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.918335 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.918416 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.918560 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.918595 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:14:14 crc kubenswrapper[3556]: E1128 00:14:14.918665 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.264263 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:15 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:15 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:15 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.264796 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913182 3556 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913264 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913291 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913385 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913394 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913438 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913199 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.913567 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913212 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.913450 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.913852 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.914047 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.914066 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.914145 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.914161 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.914195 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.914269 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.914447 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.914604 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:15 crc kubenswrapper[3556]: I1128 00:14:15.914681 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.914913 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.914952 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.915007 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.915152 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.915303 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.915372 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.915617 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:15 crc kubenswrapper[3556]: E1128 00:14:15.915963 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.264943 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:16 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:16 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:16 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.265066 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912263 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912736 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912746 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912307 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912767 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912387 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912400 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912447 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912467 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913053 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912494 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912500 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913231 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912517 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912533 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912539 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913377 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912543 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912575 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913481 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912583 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912591 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912590 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912630 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912629 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912632 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913666 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912615 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912640 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913758 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912651 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913841 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912656 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912672 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912682 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912696 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914069 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912695 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.912704 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914223 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.913600 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914308 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914540 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914553 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914651 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914739 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914813 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.914918 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915025 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915111 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:16 crc kubenswrapper[3556]: I1128 00:14:16.915300 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915359 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915693 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915767 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915814 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.915966 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916132 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916224 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916302 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916392 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916450 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916511 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916584 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:16 crc kubenswrapper[3556]: E1128 00:14:16.916680 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.264546 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:17 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:17 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:17 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.264674 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.912628 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.912776 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.912783 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.912865 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.913004 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913220 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913256 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913302 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913340 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913273 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913409 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.913427 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913462 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913301 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.913592 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.913687 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.913843 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.913922 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914066 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914154 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914251 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:17 crc kubenswrapper[3556]: I1128 00:14:17.914308 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914403 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914476 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914539 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914607 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914662 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:14:17 crc kubenswrapper[3556]: E1128 00:14:17.914722 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.264065 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:18 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:18 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:18 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.264162 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.685338 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.685742 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.685824 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.685941 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.686109 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.772270 3556 kubelet_node_status.go:506] "Node not becoming ready in 
time after startup" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912434 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912537 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912578 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912597 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912651 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912603 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912672 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912764 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912778 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912565 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912802 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912851 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912752 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912891 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912929 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912924 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.913072 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912816 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912946 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912466 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912486 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912499 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912513 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912466 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912670 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912694 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912710 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912788 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912777 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912829 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:18 crc kubenswrapper[3556]: I1128 00:14:18.912839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.914814 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.914972 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915130 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915261 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915379 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915484 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915589 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915729 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915838 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.915926 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916069 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916243 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916340 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916440 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916544 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916695 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916757 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916854 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916900 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.916961 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917041 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917114 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917221 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917437 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917521 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917573 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917643 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917677 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917759 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.917830 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.918549 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.918634 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:14:18 crc kubenswrapper[3556]: E1128 00:14:18.918688 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.264770 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:19 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:19 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:19 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.264882 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.314608 3556 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.912975 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913077 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913101 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913130 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913177 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913225 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913298 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913331 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913453 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913297 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.913490 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913341 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.913600 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913603 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913686 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:19 crc kubenswrapper[3556]: I1128 00:14:19.913003 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.913835 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.913989 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.914186 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.914375 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.914542 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.914774 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.914953 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.915156 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.915254 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.915474 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.915596 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:19 crc kubenswrapper[3556]: E1128 00:14:19.915649 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.264819 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:20 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:20 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:20 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.264948 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912151 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912328 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912352 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912431 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912490 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912504 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912601 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.912622 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912634 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912596 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912470 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912839 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912846 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912860 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.912874 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.912902 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.913042 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.913221 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.913326 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.913446 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.913467 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.913506 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.913552 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.913683 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.913799 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.913845 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.913926 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.913958 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914001 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.914094 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914130 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.914243 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914299 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.914371 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914400 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914446 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.914505 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914538 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.914624 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.914715 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.914804 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.914875 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.915793 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.916454 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.916562 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.916553 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.916717 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.916992 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.917287 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:20 crc kubenswrapper[3556]: I1128 00:14:20.917590 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.917818 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.918442 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.918680 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.918913 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.919206 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.919449 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.919688 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.919935 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.920290 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.920496 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.920723 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.920933 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.921217 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.917004 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:20 crc kubenswrapper[3556]: E1128 00:14:20.924627 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.266134 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:21 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:21 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:21 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.266260 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912274 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912373 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.912499 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912396 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912577 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912591 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912550 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912674 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912670 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.912889 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912981 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912733 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.912779 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.912787 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.913175 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.913231 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913273 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913174 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913377 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913476 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913559 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" Nov 28 00:14:21 crc kubenswrapper[3556]: I1128 00:14:21.913600 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913685 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913763 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913841 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913914 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.913985 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:14:21 crc kubenswrapper[3556]: E1128 00:14:21.914100 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.265355 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:22 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:22 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:22 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.265487 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.912913 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.912954 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.912918 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.912974 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913092 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913096 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.913219 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913237 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913173 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.913333 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913390 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913406 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.913555 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913566 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913611 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913637 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913696 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913735 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913750 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913853 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913859 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.913901 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.913700 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.913993 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914099 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914168 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914236 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914339 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914351 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914417 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914475 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914501 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914567 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914585 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914615 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914659 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914671 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914699 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914778 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914793 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914830 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914880 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.914922 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.914928 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915114 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915122 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915137 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915200 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915258 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915357 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.915379 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915444 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.915493 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915554 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:14:22 crc kubenswrapper[3556]: I1128 00:14:22.915579 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915659 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915724 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915770 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915824 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915880 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915928 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.915976 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.916048 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:22 crc kubenswrapper[3556]: E1128 00:14:22.916107 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.264480 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:23 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:23 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:23 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.264605 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.912480 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.912917 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.913321 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913416 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913462 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913519 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913509 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913604 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913696 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913733 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913781 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.913623 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.913833 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.913956 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.914182 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.914209 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.914307 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.914460 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:23 crc kubenswrapper[3556]: I1128 00:14:23.914769 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.914946 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.914953 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.915065 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.915084 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.915169 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.915455 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.915545 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:23 crc kubenswrapper[3556]: E1128 00:14:23.915574 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.265116 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:24 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:24 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:24 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.265589 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.316176 3556 kubelet.go:2906] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912320 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912374 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912424 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912495 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912515 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912552 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912519 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912562 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912433 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912344 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912722 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912760 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.912760 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.912896 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912914 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.912922 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913043 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913139 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.913145 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.913307 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913415 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913420 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913477 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913531 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913561 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913533 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913584 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913611 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913621 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913492 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913640 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913545 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913663 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913697 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:24 crc kubenswrapper[3556]: I1128 00:14:24.913701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.914269 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.914305 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.914694 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.914834 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.915085 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.915244 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.915477 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.915494 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.915557 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.915740 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.915904 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916005 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916348 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916394 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916434 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916734 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916882 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916928 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.916972 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917086 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917218 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917413 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917591 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917661 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917874 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917926 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917716 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917811 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:24 crc kubenswrapper[3556]: E1128 00:14:24.917986 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.264761 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:25 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:25 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:25 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.264878 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.912873 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.912960 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.912961 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.912994 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913003 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913056 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913254 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913283 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.913247 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913373 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913458 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.913620 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913653 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.913766 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913785 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.913929 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.914052 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.913828 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12" Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913856 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:25 crc kubenswrapper[3556]: I1128 00:14:25.913870 3556 scope.go:117] "RemoveContainer" containerID="90dd7dbcf1699d6c2dd098e8bad21d98d61147b5b5812093844f54c0f01e65f5" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.914372 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.914440 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.914603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.914811 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.914905 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.915000 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.915101 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.915181 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e" Nov 28 00:14:25 crc kubenswrapper[3556]: E1128 00:14:25.915292 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.265661 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:26 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:26 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:26 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.266323 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.631844 3556 
logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.631957 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"b203e8ed09c9350b236814135962bdc19666470cae6146b3024fa04966e01b50"} Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.912942 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.912966 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913072 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913210 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.913224 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913206 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.912975 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913322 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.913413 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913426 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913527 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913591 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.913607 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913515 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.913700 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913551 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913669 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913468 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913875 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913882 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.913921 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.914170 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914191 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.914259 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914310 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.914317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.914364 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914462 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914552 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914649 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914782 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914885 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.914985 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915121 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915151 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915348 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915440 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915440 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915531 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915573 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915603 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915632 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915671 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915686 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915731 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915751 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915829 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.915918 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.915999 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.916112 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.916159 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.916219 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.916289 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" Nov 28 00:14:26 crc kubenswrapper[3556]: I1128 00:14:26.916351 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.916450 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905" Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.916551 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.916648 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.916723 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.916796 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.917028 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.917143 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.917249 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.917359 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.917514 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:26 crc kubenswrapper[3556]: E1128 00:14:26.917970 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.264695 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:27 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:27 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:27 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.264796 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.912627 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.912728 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.912866 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" podUID="4f8aa612-9da0-4a2b-911e-6a1764a4e74e"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.912878 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.912631 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.912935 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.912939 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913137 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913298 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913346 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913357 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913405 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.913361 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.913596 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" podUID="c1620f19-8aa3-45cf-931b-7ae0e5cd14cf"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4"
Nov 28 00:14:27 crc kubenswrapper[3556]: I1128 00:14:27.913494 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.913505 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" podUID="10603adc-d495-423c-9459-4caa405960bb"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.913753 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" podUID="d0f40333-c860-4c04-8058-a0bf572dcf12"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.913846 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" podUID="63eb7413-02c3-4d6e-bb48-e5ffe5ce15be"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.913932 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.914002 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" podUID="0f394926-bdb9-425c-b36e-264d7fd34550"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.914135 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.914223 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" podUID="01feb2e0-a0f4-4573-8335-34e364e0ef40"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.914288 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-v54bt" podUID="34a48baf-1bee-4921-8bb2-9b7320e76f79"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.914352 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.914428 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qdfr4" podUID="a702c6d2-4dde-4077-ab8c-0f8df804bf7a"
Nov 28 00:14:27 crc kubenswrapper[3556]: E1128 00:14:27.914495 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" podUID="8a5ae51d-d173-4531-8975-f164c975ce1f"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.264841 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:28 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:28 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:28 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.264965 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913087 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913202 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913222 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913267 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913330 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913329 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913212 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913267 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913446 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913386 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913459 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913514 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913564 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913386 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.913428 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.914202 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.915938 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" podUID="9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.915984 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.915948 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916085 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916114 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916148 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916189 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916339 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.916372 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" podUID="45a8038e-e7f2-4d93-a6f5-7753aa54e63f"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916386 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.916525 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" podUID="cf1a8966-f594-490a-9fbb-eec5bafd13d3"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.916668 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" podUID="120b38dc-8236-4fa6-a452-642b8ad738ee"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916689 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.916805 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916867 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.916874 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.916990 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.917073 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.917121 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.917184 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.917342 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-gbw49" podUID="13045510-8717-4a71-ade4-be95a76440a7"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.917776 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.917784 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" podUID="5bacb25d-97b6-4491-8fb4-99feae1d802a"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918042 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918151 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" podUID="7d51f445-054a-4e4f-a67b-a828f5a32511"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918278 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918411 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918571 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" podUID="f728c15e-d8de-4a9a-a3ea-fdcead95cb91"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918676 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918713 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" podUID="e4a7de23-6134-4044-902a-0900dc04a501"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918848 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.918957 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" podUID="0b5c38ff-1fa8-4219-994d-15776acd4a4d"
Nov 28 00:14:28 crc kubenswrapper[3556]: I1128 00:14:28.919064 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919111 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919203 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" podUID="43ae1c37-047b-4ee2-9fee-41e337dd4ac8"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919280 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-2vhcn" podUID="0b5d722a-1123-4935-9740-52a08d018bc9"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919465 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" podUID="b54e8941-2fc4-432a-9e51-39684df9089e"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919512 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919658 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" podUID="ebf09b15-4bb1-44bf-9d54-e76fad5cf76e"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919768 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" podUID="71af81a9-7d43-49b2-9287-c375900aa905"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.919937 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" podUID="12e733dd-0939-4f1b-9cbb-13897e093787"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920109 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" podUID="d5025cb4-ddb0-4107-88c1-bcbcdb779ac0"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920165 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920215 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" podUID="6d67253e-2acd-4bc1-8185-793587da4f17"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920275 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920342 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920468 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920651 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" podUID="297ab9b6-2186-4d5b-a952-2bfd59af63c4"
Nov 28 00:14:28 crc kubenswrapper[3556]: E1128 00:14:28.920791 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" podUID="ed024e5d-8fc2-4c22-803d-73f3c9795f19"
Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.265114 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 00:14:29 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld
Nov 28 00:14:29 crc kubenswrapper[3556]: [+]process-running ok
Nov 28 00:14:29 crc kubenswrapper[3556]: healthz check failed
Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.265576 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912443 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912511 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd"
Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912524 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2"
Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912799 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf"
Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912804 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912850 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912886 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.912456 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.913057 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.913358 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.913369 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.913427 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.913749 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.914600 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.919811 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.919826 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.919946 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.920001 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.920082 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.920221 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.920376 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.932977 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.933807 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.937247 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 
00:14:29.937537 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.937811 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938053 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938191 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938302 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938453 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938489 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938530 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938717 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938745 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.938914 3556 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"metrics-tls" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939030 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939160 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939164 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939275 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939311 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939338 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939359 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939435 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939437 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939222 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939540 3556 reflector.go:351] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939254 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939583 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939642 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939659 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939602 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939553 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939743 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939773 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939183 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939832 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 00:14:29 crc 
kubenswrapper[3556]: I1128 00:14:29.939836 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.939791 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.940198 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.941160 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.942054 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.943366 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.945621 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.946371 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.965993 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.967365 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 
00:14:29.967740 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 00:14:29 crc kubenswrapper[3556]: I1128 00:14:29.973110 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.264921 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:30 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:30 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:30 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.265102 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913188 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913265 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913330 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913349 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913471 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913486 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913531 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913607 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913667 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.913689 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.914068 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.914117 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.914415 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.914460 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.914532 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.914623 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.915136 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.915292 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.915302 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.915461 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.915678 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.916062 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.921061 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.921247 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.921580 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.921840 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.922140 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.922296 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.923178 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.923569 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.923991 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.924103 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.924045 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.924459 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.924473 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.924765 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.924993 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.925644 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.926313 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.926625 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.927093 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.928063 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.929605 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.931133 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.932540 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.933254 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.934788 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.935098 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.938425 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.938602 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.938615 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.938806 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.938880 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.938948 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.939258 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Nov 28 00:14:30 crc kubenswrapper[3556]: 
I1128 00:14:30.939379 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.939795 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.941106 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.942098 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.943792 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.944143 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.944322 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.944487 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.945560 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.945721 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.947491 3556 reflector.go:351] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.947755 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.950944 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.951552 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.951753 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.951939 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.952146 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.952381 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.952657 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.952717 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.952826 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 
00:14:30.952889 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.958159 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.958950 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.959191 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.959584 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.959795 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.960985 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961207 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961310 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961432 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961466 3556 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961311 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961584 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961725 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961747 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961759 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961785 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961808 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961899 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.961910 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962060 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962106 3556 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962127 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962158 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962235 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962260 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962276 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962339 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962455 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962476 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962681 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962890 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962931 3556 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962976 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.962991 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.963069 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.963233 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.964772 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.964831 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.965106 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.965508 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.965602 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.965507 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.965774 
3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.966143 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.966382 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.966401 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.968208 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.968477 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.968743 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.980752 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.985283 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.989243 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.994891 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.995732 3556 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 00:14:30 crc kubenswrapper[3556]: I1128 00:14:30.996838 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 00:14:31 crc kubenswrapper[3556]: I1128 00:14:31.000069 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 00:14:31 crc kubenswrapper[3556]: I1128 00:14:31.015395 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 00:14:31 crc kubenswrapper[3556]: I1128 00:14:31.035544 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Nov 28 00:14:31 crc kubenswrapper[3556]: I1128 00:14:31.055056 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 00:14:31 crc kubenswrapper[3556]: I1128 00:14:31.075726 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 00:14:31 crc kubenswrapper[3556]: I1128 00:14:31.270644 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:31 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:31 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:31 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:31 crc kubenswrapper[3556]: I1128 00:14:31.270728 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Nov 28 00:14:32 crc kubenswrapper[3556]: I1128 00:14:32.265524 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:32 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:32 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:32 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:32 crc kubenswrapper[3556]: I1128 00:14:32.265644 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:32 crc kubenswrapper[3556]: I1128 00:14:32.910693 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" Nov 28 00:14:33 crc kubenswrapper[3556]: I1128 00:14:33.266660 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:33 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:33 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:33 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:33 crc kubenswrapper[3556]: I1128 00:14:33.266834 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:34 crc kubenswrapper[3556]: I1128 00:14:34.265293 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:34 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:34 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:34 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:34 crc kubenswrapper[3556]: I1128 00:14:34.265419 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:35 crc kubenswrapper[3556]: I1128 00:14:35.265176 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:35 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:35 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:35 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:35 crc kubenswrapper[3556]: I1128 00:14:35.265304 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:36 crc kubenswrapper[3556]: I1128 00:14:36.264745 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:36 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:36 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:36 crc 
kubenswrapper[3556]: healthz check failed Nov 28 00:14:36 crc kubenswrapper[3556]: I1128 00:14:36.264874 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:37 crc kubenswrapper[3556]: I1128 00:14:37.264224 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:37 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:37 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:37 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:37 crc kubenswrapper[3556]: I1128 00:14:37.264328 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:38 crc kubenswrapper[3556]: I1128 00:14:38.001243 3556 kubelet_node_status.go:729] "Recording event message for node" node="crc" event="NodeReady" Nov 28 00:14:38 crc kubenswrapper[3556]: I1128 00:14:38.264814 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:38 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:38 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:38 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:38 crc kubenswrapper[3556]: I1128 00:14:38.264929 3556 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:39 crc kubenswrapper[3556]: I1128 00:14:39.265517 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:39 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:39 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:39 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:39 crc kubenswrapper[3556]: I1128 00:14:39.265613 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:40 crc kubenswrapper[3556]: I1128 00:14:40.265166 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:40 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:40 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:40 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:40 crc kubenswrapper[3556]: I1128 00:14:40.265321 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:41 crc kubenswrapper[3556]: I1128 00:14:41.264897 3556 patch_prober.go:28] interesting pod/router-default-5c9bf7bc58-6jctv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 00:14:41 crc kubenswrapper[3556]: [-]has-synced failed: reason withheld Nov 28 00:14:41 crc kubenswrapper[3556]: [+]process-running ok Nov 28 00:14:41 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:41 crc kubenswrapper[3556]: I1128 00:14:41.265048 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:41 crc kubenswrapper[3556]: I1128 00:14:41.265115 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:14:41 crc kubenswrapper[3556]: I1128 00:14:41.268129 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"26ea99a990c8b29e8794df03ad0ad41b98f38cf49bbad1e53ff53371275f3629"} pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" containerMessage="Container router failed startup probe, will be restarted" Nov 28 00:14:41 crc kubenswrapper[3556]: I1128 00:14:41.268230 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" podUID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerName="router" containerID="cri-o://26ea99a990c8b29e8794df03ad0ad41b98f38cf49bbad1e53ff53371275f3629" gracePeriod=3600 Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.833744 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: 
\"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.833843 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.833894 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.833942 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.833987 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834060 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") 
pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834114 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834160 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834202 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834275 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834334 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834382 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834436 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834479 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834525 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834567 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834610 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834655 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834708 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834775 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834843 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.834927 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835051 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835109 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835154 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " 
pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835202 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835250 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835320 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835365 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835431 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: 
\"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835473 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835520 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835562 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835606 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835656 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835720 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835766 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835833 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835891 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835929 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.835984 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.836005 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-service-ca\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.836196 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-oauth-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.836846 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " 
pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.836035 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d67253e-2acd-4bc1-8185-793587da4f17-config\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.836970 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837038 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837082 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-trusted-ca-bundle\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837090 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837193 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837248 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837300 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837348 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc 
kubenswrapper[3556]: I1128 00:14:48.837616 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837661 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837704 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837741 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-audit-policies\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837749 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " 
pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837899 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.837974 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.838566 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.838717 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.838800 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.838957 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839141 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839243 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839297 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839316 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839383 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839450 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839517 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839586 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839661 3556 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839731 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839819 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839891 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839990 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: 
\"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840135 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840247 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840400 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840496 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840603 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840682 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840758 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840842 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 
28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840910 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.840982 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841117 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-srv-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841131 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841213 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod 
\"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841215 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d67253e-2acd-4bc1-8185-793587da4f17-serving-cert\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841220 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-package-server-manager-serving-cert\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841272 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841324 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcp\" (UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841373 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841417 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841471 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841524 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841566 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841607 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841621 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8wh\" (UniqueName: \"kubernetes.io/projected/5bacb25d-97b6-4491-8fb4-99feae1d802a-kube-api-access-4w8wh\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841651 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841701 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841745 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841787 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841828 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841868 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841911 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841953 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.841999 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:48 crc kubenswrapper[3556]: E1128 00:14:48.842067 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97 podName: nodeName:}" failed. No retries permitted until 2025-11-28 00:16:50.842043266 +0000 UTC m=+272.434275486 (durationBeforeRetry 2m2s). 
Error: MountVolume.MountDevice failed for volume "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97") pod "image-registry-75779c45fd-v2j2v" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842115 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842165 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842220 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842224 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b54e8941-2fc4-432a-9e51-39684df9089e-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842267 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842332 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842380 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842424 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842474 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842518 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842566 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842582 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842622 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " 
pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842670 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842710 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842769 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842814 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842875 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" 
(UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842916 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.843099 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.843187 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.843244 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.843310 3556 reconciler_common.go:231] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.844080 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-client\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.844793 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.844924 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10603adc-d495-423c-9459-4caa405960bb-metrics-tls\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.845377 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnmc\" (UniqueName: \"kubernetes.io/projected/59748b9b-c309-4712-aa85-bb38d71c4915-kube-api-access-fqnmc\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.845943 
3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-serving-cert\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.846107 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bxr\" (UniqueName: \"kubernetes.io/projected/0f394926-bdb9-425c-b36e-264d7fd34550-kube-api-access-l8bxr\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.846129 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.846334 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-serving-cert\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.847907 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed024e5d-8fc2-4c22-803d-73f3c9795f19-kube-api-access\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.848260 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.848264 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-client\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.849032 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-etcd-serving-ca\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.849427 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/120b38dc-8236-4fa6-a452-642b8ad738ee-images\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.850151 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-trusted-ca-bundle\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: 
\"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.842118 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-config\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.850253 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zrh\" (UniqueName: \"kubernetes.io/projected/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-kube-api-access-j7zrh\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.850318 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b54e8941-2fc4-432a-9e51-39684df9089e-trusted-ca\") pod \"cluster-image-registry-operator-7769bd8d7d-q5cvv\" (UID: \"b54e8941-2fc4-432a-9e51-39684df9089e\") " pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.839042 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-audit-policies\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.850469 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-error\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.851188 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc4r\" (UniqueName: \"kubernetes.io/projected/c085412c-b875-46c9-ae3e-e6b0d8067091-kube-api-access-tvc4r\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.851794 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e4a7de23-6134-4044-902a-0900dc04a501-signing-key\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.852109 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bacb25d-97b6-4491-8fb4-99feae1d802a-trusted-ca-bundle\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.853251 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-serving-cert\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.853361 3556 operation_generator.go:721] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854046 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854079 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-client\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854086 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e4a7de23-6134-4044-902a-0900dc04a501-signing-cabundle\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854415 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-service-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854479 3556 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854592 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f394926-bdb9-425c-b36e-264d7fd34550-config\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854627 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed024e5d-8fc2-4c22-803d-73f3c9795f19-config\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854645 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-srv-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.854860 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bacb25d-97b6-4491-8fb4-99feae1d802a-encryption-config\") pod \"apiserver-69c565c9b6-vbdpd\" (UID: \"5bacb25d-97b6-4491-8fb4-99feae1d802a\") " pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 
00:14:48.855079 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-serving-cert\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.855139 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.855518 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/297ab9b6-2186-4d5b-a952-2bfd59af63c4-proxy-tls\") pod \"machine-config-controller-6df6df6b6b-58shh\" (UID: \"297ab9b6-2186-4d5b-a952-2bfd59af63c4\") " pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.855710 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtrv\" (UniqueName: \"kubernetes.io/projected/12e733dd-0939-4f1b-9cbb-13897e093787-kube-api-access-vvtrv\") pod \"csi-hostpathplugin-hvm8g\" (UID: \"12e733dd-0939-4f1b-9cbb-13897e093787\") " pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.855982 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgvs\" (UniqueName: \"kubernetes.io/projected/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-kube-api-access-6kgvs\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.857554 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-trusted-ca-bundle\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858019 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-service-ca\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858316 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-config\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858494 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-config\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858564 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858650 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858737 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-service-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858929 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-kube-api-access\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" (UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858928 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-audit\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.858998 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed024e5d-8fc2-4c22-803d-73f3c9795f19-serving-cert\") pod \"kube-apiserver-operator-78d54458c4-sc8h7\" (UID: \"ed024e5d-8fc2-4c22-803d-73f3c9795f19\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.859062 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5025cb4-ddb0-4107-88c1-bcbcdb779ac0-webhook-certs\") pod \"multus-admission-controller-6c7c885997-4hbbc\" (UID: \"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0\") " pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.859127 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f394926-bdb9-425c-b36e-264d7fd34550-serving-cert\") pod \"openshift-controller-manager-operator-7978d7d7f6-2nt8z\" (UID: \"0f394926-bdb9-425c-b36e-264d7fd34550\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.859321 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-etcd-serving-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.859486 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"redhat-marketplace-8s8pc\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") " pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 
00:14:48.860067 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d51f445-054a-4e4f-a67b-a828f5a32511-trusted-ca\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.860248 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13045510-8717-4a71-ade4-be95a76440a7-config-volume\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.860507 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71af81a9-7d43-49b2-9287-c375900aa905-config\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.861122 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5c38ff-1fa8-4219-994d-15776acd4a4d-serving-cert\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.861312 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-images\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.861362 
3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf09b15-4bb1-44bf-9d54-e76fad5cf76e-config\") pod \"authentication-operator-7cc7ff75d5-g9qv8\" (UID: \"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e\") " pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.861519 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-etcd-ca\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.862245 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd8h\" (UniqueName: \"kubernetes.io/projected/8a5ae51d-d173-4531-8975-f164c975ce1f-kube-api-access-wrd8h\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.862486 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"certified-operators-7287f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.862683 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-config\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:48 crc kubenswrapper[3556]: 
I1128 00:14:48.862990 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz92\" (UniqueName: \"kubernetes.io/projected/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-kube-api-access-2nz92\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.863264 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-config\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.863780 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-cliconfig\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.864758 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.865638 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: 
\"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.865877 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hpxx\" (UniqueName: \"kubernetes.io/projected/41e8708a-e40d-4d28-846b-c52eda4d1755-kube-api-access-8hpxx\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.867196 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/41e8708a-e40d-4d28-846b-c52eda4d1755-encryption-config\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.867295 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5c38ff-1fa8-4219-994d-15776acd4a4d-config\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.868392 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcvj\" (UniqueName: \"kubernetes.io/projected/530553aa-0a1d-423e-8a22-f5eb4bdbb883-kube-api-access-8dcvj\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.868742 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.869647 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a5ae51d-d173-4531-8975-f164c975ce1f-profile-collector-cert\") pod \"catalog-operator-857456c46-7f5wf\" (UID: \"8a5ae51d-d173-4531-8975-f164c975ce1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.868801 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/120b38dc-8236-4fa6-a452-642b8ad738ee-proxy-tls\") pod \"machine-config-operator-76788bff89-wkjgm\" (UID: \"120b38dc-8236-4fa6-a452-642b8ad738ee\") " pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.869005 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-machine-api-operator-tls\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.869140 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a702c6d2-4dde-4077-ab8c-0f8df804bf7a-metrics-certs\") pod \"network-metrics-daemon-qdfr4\" (UID: \"a702c6d2-4dde-4077-ab8c-0f8df804bf7a\") " pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 
00:14:48.869157 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-samples-operator-tls\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.869554 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.870521 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13045510-8717-4a71-ade4-be95a76440a7-metrics-tls\") pod \"dns-default-gbw49\" (UID: \"13045510-8717-4a71-ade4-be95a76440a7\") " pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.870566 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-serving-cert\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.870590 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/530553aa-0a1d-423e-8a22-f5eb4bdbb883-serving-cert\") pod \"openshift-config-operator-77658b5b66-dq5sc\" (UID: \"530553aa-0a1d-423e-8a22-f5eb4bdbb883\") " 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.870571 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7-serving-cert\") pod \"kube-storage-version-migrator-operator-686c6c748c-qbnnr\" (UID: \"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.870596 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-router-certs\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.870659 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-serving-cert\" (UniqueName: \"kubernetes.io/secret/59748b9b-c309-4712-aa85-bb38d71c4915-webhook-serving-cert\") pod \"console-conversion-webhook-595f9969b-l6z49\" (UID: \"59748b9b-c309-4712-aa85-bb38d71c4915\") " pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.870803 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmhq\" (UniqueName: \"kubernetes.io/projected/cf1a8966-f594-490a-9fbb-eec5bafd13d3-kube-api-access-hqmhq\") pod \"migrator-f7c6d88df-q2fnv\" (UID: \"cf1a8966-f594-490a-9fbb-eec5bafd13d3\") " pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871117 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxcp\" 
(UniqueName: \"kubernetes.io/projected/d0f40333-c860-4c04-8058-a0bf572dcf12-kube-api-access-qcxcp\") pod \"network-check-source-5c5478f8c-vqvt7\" (UID: \"d0f40333-c860-4c04-8058-a0bf572dcf12\") " pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871143 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/4f8aa612-9da0-4a2b-911e-6a1764a4e74e-kube-api-access-55f7t\") pod \"machine-api-operator-788b7c6b6c-ctdmb\" (UID: \"4f8aa612-9da0-4a2b-911e-6a1764a4e74e\") " pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871163 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5cx\" (UniqueName: \"kubernetes.io/projected/0b5d722a-1123-4935-9740-52a08d018bc9-kube-api-access-dt5cx\") pod \"ingress-canary-2vhcn\" (UID: \"0b5d722a-1123-4935-9740-52a08d018bc9\") " pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871366 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-session\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871365 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/41e8708a-e40d-4d28-846b-c52eda4d1755-image-import-ca\") pod \"apiserver-7fc54b8dd7-d2bhp\" (UID: \"41e8708a-e40d-4d28-846b-c52eda4d1755\") " pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871505 3556 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871936 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"controller-manager-778975cc4f-x5vcf\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") " pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.871955 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggjm\" (UniqueName: \"kubernetes.io/projected/01feb2e0-a0f4-4573-8335-34e364e0ef40-kube-api-access-7ggjm\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.872493 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.872629 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-apiservice-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.872683 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c085412c-b875-46c9-ae3e-e6b0d8067091-profile-collector-cert\") pod \"olm-operator-6d8474f75f-x54mh\" (UID: \"c085412c-b875-46c9-ae3e-e6b0d8067091\") " pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.872987 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71af81a9-7d43-49b2-9287-c375900aa905-serving-cert\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.872999 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.873088 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"marketplace-operator-8b455464d-f9xdt\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.873347 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1-console-oauth-config\") pod \"console-644bb77b49-5x5xk\" (UID: \"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1\") " pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.873513 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlhw\" (UniqueName: \"kubernetes.io/projected/bd556935-a077-45df-ba3f-d42c39326ccd-kube-api-access-hjlhw\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.873542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qdfr4" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.873908 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/01feb2e0-a0f4-4573-8335-34e364e0ef40-v4-0-config-user-template-login\") pod \"oauth-openshift-74fc7c67cc-xqf8b\" (UID: \"01feb2e0-a0f4-4573-8335-34e364e0ef40\") " pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.874068 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-serving-cert\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.874287 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1620f19-8aa3-45cf-931b-7ae0e5cd14cf-serving-cert\") pod \"kube-controller-manager-operator-6f6cb54958-rbddb\" 
(UID: \"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.874669 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd556935-a077-45df-ba3f-d42c39326ccd-webhook-cert\") pod \"packageserver-8464bcc55b-sjnqz\" (UID: \"bd556935-a077-45df-ba3f-d42c39326ccd\") " pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.875448 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.881508 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.881961 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51f445-054a-4e4f-a67b-a828f5a32511-metrics-tls\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.884381 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.884886 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-serving-cert\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.891936 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.897715 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.904950 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.917197 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.925049 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.943985 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944056 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944082 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944112 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944140 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944189 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4t2\" (UniqueName: \"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944212 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944237 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944259 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944286 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.944313 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.946642 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.950961 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"redhat-operators-f4jkp\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") " pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.951877 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpsk\" (UniqueName: \"kubernetes.io/projected/6268b7fe-8910-4505-b404-6f1df638105c-kube-api-access-2zpsk\") pod \"downloads-65476884b9-9wcvx\" (UID: \"6268b7fe-8910-4505-b404-6f1df638105c\") " pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.954092 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"route-controller-manager-776b8b7477-sfpvs\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") " pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.954115 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.954343 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71af81a9-7d43-49b2-9287-c375900aa905-kube-api-access\") pod \"openshift-kube-scheduler-operator-5d9b995f6b-fcgd7\" (UID: \"71af81a9-7d43-49b2-9287-c375900aa905\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.954468 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-76gl8\" (UniqueName: \"kubernetes.io/projected/34a48baf-1bee-4921-8bb2-9b7320e76f79-kube-api-access-76gl8\") pod \"network-check-target-v54bt\" (UID: \"34a48baf-1bee-4921-8bb2-9b7320e76f79\") " pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.956058 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.957641 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpl7\" (UniqueName: \"kubernetes.io/projected/e9127708-ccfd-4891-8a3a-f0cacb77e0f4-kube-api-access-5rpl7\") pod \"console-operator-5dbbc74dc9-cp5cd\" (UID: \"e9127708-ccfd-4891-8a3a-f0cacb77e0f4\") " pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.965999 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.978173 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm986\" (UniqueName: \"kubernetes.io/projected/45a8038e-e7f2-4d93-a6f5-7753aa54e63f-kube-api-access-bm986\") pod \"control-plane-machine-set-operator-649bd778b4-tt5tw\" (UID: \"45a8038e-e7f2-4d93-a6f5-7753aa54e63f\") " pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.979033 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vhj\" (UniqueName: \"kubernetes.io/projected/6d67253e-2acd-4bc1-8185-793587da4f17-kube-api-access-d9vhj\") pod \"service-ca-operator-546b4f8984-pwccz\" (UID: \"6d67253e-2acd-4bc1-8185-793587da4f17\") " pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.979465 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-js87r\" (UniqueName: \"kubernetes.io/projected/e4a7de23-6134-4044-902a-0900dc04a501-kube-api-access-js87r\") pod \"service-ca-666f99b6f-kk8kg\" (UID: \"e4a7de23-6134-4044-902a-0900dc04a501\") " pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.980061 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5kg\" (UniqueName: \"kubernetes.io/projected/7d51f445-054a-4e4f-a67b-a828f5a32511-kube-api-access-tl5kg\") pod \"ingress-operator-7d46d5bb6d-rrg6t\" (UID: \"7d51f445-054a-4e4f-a67b-a828f5a32511\") " pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.980685 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4t2\" (UniqueName: 
\"kubernetes.io/projected/10603adc-d495-423c-9459-4caa405960bb-kube-api-access-nf4t2\") pod \"dns-operator-75f687757b-nz2xb\" (UID: \"10603adc-d495-423c-9459-4caa405960bb\") " pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.983781 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:48 crc kubenswrapper[3556]: I1128 00:14:48.991196 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.027036 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.037844 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.045572 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.045643 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.046518 3556 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.049537 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"community-operators-8jhz6\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") " pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.052360 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.053145 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2h9\" (UniqueName: \"kubernetes.io/projected/43ae1c37-047b-4ee2-9fee-41e337dd4ac8-kube-api-access-lx2h9\") pod \"openshift-apiserver-operator-7c88c4c865-kn67m\" (UID: \"43ae1c37-047b-4ee2-9fee-41e337dd4ac8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.061737 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.067156 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.074491 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.089349 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.097278 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.102611 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.109581 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.116450 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.123268 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.130406 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2vhcn" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.144553 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.146663 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.147048 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.147240 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.147268 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.147296 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.151271 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8gt\" (UniqueName: 
\"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"community-operators-sdddl\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.152127 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp86\" (UniqueName: \"kubernetes.io/projected/f728c15e-d8de-4a9a-a3ea-fdcead95cb91-kube-api-access-6kp86\") pod \"cluster-samples-operator-bc474d5d6-wshwg\" (UID: \"f728c15e-d8de-4a9a-a3ea-fdcead95cb91\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.154375 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.155219 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9724w\" (UniqueName: \"kubernetes.io/projected/0b5c38ff-1fa8-4219-994d-15776acd4a4d-kube-api-access-9724w\") pod \"etcd-operator-768d5b5d86-722mg\" (UID: \"0b5c38ff-1fa8-4219-994d-15776acd4a4d\") " pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.155233 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d97\" (UniqueName: \"kubernetes.io/projected/63eb7413-02c3-4d6e-bb48-e5ffe5ce15be-kube-api-access-x5d97\") pod \"package-server-manager-84d578d794-jw7r2\" (UID: \"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be\") " pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.155316 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.164162 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.167372 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.168228 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.190616 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.205364 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.212219 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.219931 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.227450 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.233451 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.234651 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.240780 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.301865 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.465487 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.751839 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"d44d716815e0e63a211256347ad7e947db2e0891b9e7430b7cbd77b00b844f5c"} Nov 28 00:14:49 crc kubenswrapper[3556]: I1128 00:14:49.753703 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"5869124eeb5a5433262b88a7f4058c60f983572a971239a8ff696142460f5cd1"} Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.512205 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f394926_bdb9_425c_b36e_264d7fd34550.slice/crio-f1608043ae8b98d7063eb496b22d6cfd892149914d0d6cca192105788a67d513 WatchSource:0}: Error finding container f1608043ae8b98d7063eb496b22d6cfd892149914d0d6cca192105788a67d513: Status 404 
returned error can't find the container with id f1608043ae8b98d7063eb496b22d6cfd892149914d0d6cca192105788a67d513 Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.551637 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1a8966_f594_490a_9fbb_eec5bafd13d3.slice/crio-34cd35771e6aa783a0e8fae9ce4d70273d17e269ac5022107092df8d33293d65 WatchSource:0}: Error finding container 34cd35771e6aa783a0e8fae9ce4d70273d17e269ac5022107092df8d33293d65: Status 404 returned error can't find the container with id 34cd35771e6aa783a0e8fae9ce4d70273d17e269ac5022107092df8d33293d65 Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.671347 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3e81c3_c292_4130_9436_f94062c91efd.slice/crio-a52910f67f00fc78e8c4ae0721ed574e2efcc1389eb2b85421bfae10cfa956ff WatchSource:0}: Error finding container a52910f67f00fc78e8c4ae0721ed574e2efcc1389eb2b85421bfae10cfa956ff: Status 404 returned error can't find the container with id a52910f67f00fc78e8c4ae0721ed574e2efcc1389eb2b85421bfae10cfa956ff Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.672125 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1620f19_8aa3_45cf_931b_7ae0e5cd14cf.slice/crio-e5d906887271341d86b37d2674c4f0deaef016d9800a11b2229351c1a81009f7 WatchSource:0}: Error finding container e5d906887271341d86b37d2674c4f0deaef016d9800a11b2229351c1a81009f7: Status 404 returned error can't find the container with id e5d906887271341d86b37d2674c4f0deaef016d9800a11b2229351c1a81009f7 Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.677214 3556 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae0dfbb_a0a9_45bb_85b5_cd9f94f64fe7.slice/crio-9a9e57fdc52134f06e0dda2bb509a4d941022265de376e4760f285ec5d19cbc1 WatchSource:0}: Error finding container 9a9e57fdc52134f06e0dda2bb509a4d941022265de376e4760f285ec5d19cbc1: Status 404 returned error can't find the container with id 9a9e57fdc52134f06e0dda2bb509a4d941022265de376e4760f285ec5d19cbc1 Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.681301 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod297ab9b6_2186_4d5b_a952_2bfd59af63c4.slice/crio-a7645adf053713ca6050af86f65ed12ce3cdde02da8dbcfd8f986c3320452bf5 WatchSource:0}: Error finding container a7645adf053713ca6050af86f65ed12ce3cdde02da8dbcfd8f986c3320452bf5: Status 404 returned error can't find the container with id a7645adf053713ca6050af86f65ed12ce3cdde02da8dbcfd8f986c3320452bf5 Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.685395 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71af81a9_7d43_49b2_9287_c375900aa905.slice/crio-352423bc9109ca35a6dcc50f89a4ae2c163ff4583ee8f16947a0d52da56d0a9a WatchSource:0}: Error finding container 352423bc9109ca35a6dcc50f89a4ae2c163ff4583ee8f16947a0d52da56d0a9a: Status 404 returned error can't find the container with id 352423bc9109ca35a6dcc50f89a4ae2c163ff4583ee8f16947a0d52da56d0a9a Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.704595 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59748b9b_c309_4712_aa85_bb38d71c4915.slice/crio-3bd8408519da66a8d412a4c5b01a819414b36d63d4e57017639767aa584de3e0 WatchSource:0}: Error finding container 3bd8408519da66a8d412a4c5b01a819414b36d63d4e57017639767aa584de3e0: Status 404 returned error can't find the container with id 
3bd8408519da66a8d412a4c5b01a819414b36d63d4e57017639767aa584de3e0 Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.706052 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3482be94_0cdb_4e2a_889b_e5fac59fdbf5.slice/crio-eea601870eb6dbafffccd3eb3b1ead953886969a6f1fe9e21f3b501a15d89bb4 WatchSource:0}: Error finding container eea601870eb6dbafffccd3eb3b1ead953886969a6f1fe9e21f3b501a15d89bb4: Status 404 returned error can't find the container with id eea601870eb6dbafffccd3eb3b1ead953886969a6f1fe9e21f3b501a15d89bb4 Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.769715 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"3bd8408519da66a8d412a4c5b01a819414b36d63d4e57017639767aa584de3e0"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.773700 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"9a9e57fdc52134f06e0dda2bb509a4d941022265de376e4760f285ec5d19cbc1"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.775535 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"8d0eafe119be39289ff1fc1386175bf60b72cad7de2c70279ef31ef18873f603"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.776934 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" 
event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"cdd965d7e8fc3a68dcea7e702c60a4aac02294f507252bd0abd6afe7a727a106"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.781219 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"34cd35771e6aa783a0e8fae9ce4d70273d17e269ac5022107092df8d33293d65"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.782873 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"a7645adf053713ca6050af86f65ed12ce3cdde02da8dbcfd8f986c3320452bf5"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.789031 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"19df70af114a22a2e4f781c3e6b786a3deea68547ebf97fc2244ab3d01397145"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.794515 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"0dd3b22fb6fbaa1afddb3f961953e453718d4171074dd765e74391ae23e20ae8"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.802392 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"f1608043ae8b98d7063eb496b22d6cfd892149914d0d6cca192105788a67d513"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.803507 3556 kubelet.go:2461] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"962683f9f83d5649a1b1c1c6b2d5421fcfb889d819c12cebcafef5b4038dc0ee"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.819138 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"a52910f67f00fc78e8c4ae0721ed574e2efcc1389eb2b85421bfae10cfa956ff"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.824735 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"722a3f00159cb6f45feac615bd18999d7a9a377e2c7edbca614d1836a89f9943"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.831361 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"eea601870eb6dbafffccd3eb3b1ead953886969a6f1fe9e21f3b501a15d89bb4"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.833476 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"bd729bb789a8deb3d6291c12d7cf7f7566700cb2db63bfa45b259cab685bcef9"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.833510 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-76788bff89-wkjgm" event={"ID":"120b38dc-8236-4fa6-a452-642b8ad738ee","Type":"ContainerStarted","Data":"344b73ae7fbf186e47f0d78dd12a9e238345ce7ace4a1a51f96473ff20e39c1a"} Nov 28 00:14:50 crc kubenswrapper[3556]: 
I1128 00:14:50.834396 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"3a30c31665ef18d4c8c251c4ce572ff517213df347604c3ef1e467be6bbf25b8"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.835481 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"e5d906887271341d86b37d2674c4f0deaef016d9800a11b2229351c1a81009f7"} Nov 28 00:14:50 crc kubenswrapper[3556]: I1128 00:14:50.836242 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"352423bc9109ca35a6dcc50f89a4ae2c163ff4583ee8f16947a0d52da56d0a9a"} Nov 28 00:14:50 crc kubenswrapper[3556]: W1128 00:14:50.873156 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod530553aa_0a1d_423e_8a22_f5eb4bdbb883.slice/crio-c3a2be8cdbb56bc69b4920c4371dbe8db8adc89d44e77e6c34bb22815548b757 WatchSource:0}: Error finding container c3a2be8cdbb56bc69b4920c4371dbe8db8adc89d44e77e6c34bb22815548b757: Status 404 returned error can't find the container with id c3a2be8cdbb56bc69b4920c4371dbe8db8adc89d44e77e6c34bb22815548b757 Nov 28 00:14:51 crc kubenswrapper[3556]: W1128 00:14:51.060642 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a8038e_e7f2_4d93_a6f5_7753aa54e63f.slice/crio-e4d112aa968cc9f84eb8f3936806ca2ebedacd7325f2b1ad1540f3e23a310d2c WatchSource:0}: Error finding container e4d112aa968cc9f84eb8f3936806ca2ebedacd7325f2b1ad1540f3e23a310d2c: Status 404 returned error can't 
find the container with id e4d112aa968cc9f84eb8f3936806ca2ebedacd7325f2b1ad1540f3e23a310d2c Nov 28 00:14:51 crc kubenswrapper[3556]: W1128 00:14:51.153881 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d67253e_2acd_4bc1_8185_793587da4f17.slice/crio-03b4a75c0fcd53a7419f1fd69b54ac6667196fd95501149b35027affac61daf9 WatchSource:0}: Error finding container 03b4a75c0fcd53a7419f1fd69b54ac6667196fd95501149b35027affac61daf9: Status 404 returned error can't find the container with id 03b4a75c0fcd53a7419f1fd69b54ac6667196fd95501149b35027affac61daf9 Nov 28 00:14:51 crc kubenswrapper[3556]: W1128 00:14:51.188033 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e8708a_e40d_4d28_846b_c52eda4d1755.slice/crio-815f578a0731cf3b0c6a3cae665788cf228229aae50050b682ca6a9ea876127c WatchSource:0}: Error finding container 815f578a0731cf3b0c6a3cae665788cf228229aae50050b682ca6a9ea876127c: Status 404 returned error can't find the container with id 815f578a0731cf3b0c6a3cae665788cf228229aae50050b682ca6a9ea876127c Nov 28 00:14:51 crc kubenswrapper[3556]: W1128 00:14:51.609470 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4dca86_e6ee_4ec9_8324_86aff960225e.slice/crio-33192eb70edfcf11231ecd532784f6f4f40b7e675b25362168b4ee9286c72af7 WatchSource:0}: Error finding container 33192eb70edfcf11231ecd532784f6f4f40b7e675b25362168b4ee9286c72af7: Status 404 returned error can't find the container with id 33192eb70edfcf11231ecd532784f6f4f40b7e675b25362168b4ee9286c72af7 Nov 28 00:14:51 crc kubenswrapper[3556]: I1128 00:14:51.870803 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" 
event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerStarted","Data":"431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776"} Nov 28 00:14:51 crc kubenswrapper[3556]: I1128 00:14:51.872282 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:51 crc kubenswrapper[3556]: I1128 00:14:51.887938 3556 patch_prober.go:28] interesting pod/controller-manager-778975cc4f-x5vcf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Nov 28 00:14:51 crc kubenswrapper[3556]: I1128 00:14:51.888022 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Nov 28 00:14:51 crc kubenswrapper[3556]: W1128 00:14:51.895575 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4a7de23_6134_4044_902a_0900dc04a501.slice/crio-23e43bc170685bcd2cbe67cbacd8a1c9f13ea95bdc56dad0bfd0700dc1a3ba12 WatchSource:0}: Error finding container 23e43bc170685bcd2cbe67cbacd8a1c9f13ea95bdc56dad0bfd0700dc1a3ba12: Status 404 returned error can't find the container with id 23e43bc170685bcd2cbe67cbacd8a1c9f13ea95bdc56dad0bfd0700dc1a3ba12 Nov 28 00:14:51 crc kubenswrapper[3556]: I1128 00:14:51.904777 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" 
event={"ID":"8a5ae51d-d173-4531-8975-f164c975ce1f","Type":"ContainerStarted","Data":"ff7f2bc3eeadf9175ec150e1218e82d4e1a58fa544c801c6061fe8e9c4f8fc6e"} Nov 28 00:14:51 crc kubenswrapper[3556]: W1128 00:14:51.908906 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13045510_8717_4a71_ade4_be95a76440a7.slice/crio-38737addc17fe2fb8916ef9a005680142ed874285b098c929d5c23d94209be8b WatchSource:0}: Error finding container 38737addc17fe2fb8916ef9a005680142ed874285b098c929d5c23d94209be8b: Status 404 returned error can't find the container with id 38737addc17fe2fb8916ef9a005680142ed874285b098c929d5c23d94209be8b Nov 28 00:14:51 crc kubenswrapper[3556]: I1128 00:14:51.947524 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5c5478f8c-vqvt7" event={"ID":"d0f40333-c860-4c04-8058-a0bf572dcf12","Type":"ContainerStarted","Data":"bf01a61fdbbaacf8ff779afae7fe0a220195b9d0921e20d7e8237c5e54e45506"} Nov 28 00:14:51 crc kubenswrapper[3556]: W1128 00:14:51.948816 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12e733dd_0939_4f1b_9cbb_13897e093787.slice/crio-6fb9aed2a2ab798b9c9d7aa3d9a50eb3f40ca6e7bafdd71fc9ca19247fdf87ea WatchSource:0}: Error finding container 6fb9aed2a2ab798b9c9d7aa3d9a50eb3f40ca6e7bafdd71fc9ca19247fdf87ea: Status 404 returned error can't find the container with id 6fb9aed2a2ab798b9c9d7aa3d9a50eb3f40ca6e7bafdd71fc9ca19247fdf87ea Nov 28 00:14:51 crc kubenswrapper[3556]: I1128 00:14:51.981141 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"57d2f7f0302a76717c284730ef31335ab49352fd84f63b84f84696be445f64ab"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.022945 3556 kubelet.go:2461] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"39895f00516f2488b4f7feee1e444186be1f9d7dcdb9a45655fe7f8647aff220"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.022987 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-644bb77b49-5x5xk" event={"ID":"9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1","Type":"ContainerStarted","Data":"85b54a3fd1060b0b1a2523cca7972c3cf58acac0b6ebbfe33f2109d9438d66dd"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.037704 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-686c6c748c-qbnnr" event={"ID":"9ae0dfbb-a0a9-45bb-85b5-cd9f94f64fe7","Type":"ContainerStarted","Data":"ae57164ccb47c261448c4a23722b26637ba5a6a23ef10a7e1fb5c5d56cf162d8"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.041800 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"03b4a75c0fcd53a7419f1fd69b54ac6667196fd95501149b35027affac61daf9"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.049986 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6c7c885997-4hbbc" event={"ID":"d5025cb4-ddb0-4107-88c1-bcbcdb779ac0","Type":"ContainerStarted","Data":"342dee354903cfdfc303d5d75373583a68a593ac952757c7fd885a60cd90b7a5"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.066322 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"815f578a0731cf3b0c6a3cae665788cf228229aae50050b682ca6a9ea876127c"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 
00:14:52.074590 3556 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="efb15e1bc3010a2eaba8de1d315ede99d3b07b6163cd4e47c5d1f9c95bc5e6ec" exitCode=0 Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.074658 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"efb15e1bc3010a2eaba8de1d315ede99d3b07b6163cd4e47c5d1f9c95bc5e6ec"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.076785 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"4b2325571d13dea58205ede882f078c01cc1e9d29d2eaabcc1e4996d0c8d0ba8"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.077735 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"9e10b30a83bf01d5dff37b157d5c67c63136dd175f4be4bf293653b059cdf858"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.079715 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" event={"ID":"01feb2e0-a0f4-4573-8335-34e364e0ef40","Type":"ContainerStarted","Data":"334550eb3fe49bda77421de37de27d487f45ccf376557bd06837d3d56fc79ecc"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.082856 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"d778edb4754503df17e663ebff827a083e466b62a00aff1f9eccfbb6b9beb06b"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.084376 3556 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7978d7d7f6-2nt8z" event={"ID":"0f394926-bdb9-425c-b36e-264d7fd34550","Type":"ContainerStarted","Data":"cc64e68245ebafa75d5bd8ac8234ae3cbbcd547861035a64fdb9f7f520f38d99"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.101873 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-f7c6d88df-q2fnv" event={"ID":"cf1a8966-f594-490a-9fbb-eec5bafd13d3","Type":"ContainerStarted","Data":"47d24b037ce51f79a438e08e93fad56c91f324cca5e9fdce8494c9dea96ac095"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.104476 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"e6910368a04bcdd959f096942e9fd6ab216277e30bd6e7161dd790fca5743caf"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.105477 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"3bd726061ae1b87c232ae9ce371120c86f41320648ccfaa8a3d1cdd459309eae"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.111654 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"33192eb70edfcf11231ecd532784f6f4f40b7e675b25362168b4ee9286c72af7"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.117377 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"c3a2be8cdbb56bc69b4920c4371dbe8db8adc89d44e77e6c34bb22815548b757"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.119146 
3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"195f49fb02a42f994b8b3d334b8850f88e01f7d8d80d639b4762715c8f0289e5"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.122042 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"e4fbc35641c5964747cc69bd4cd18754a12fab0c6a5fdac87c2f7f2bbb4c2846"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.290953 3556 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.291310 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"b3304b7a9c892f101d92624e3e719ac2a3f7f754009aa203f0e031b6cf57f115"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.297358 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-6f6cb54958-rbddb" event={"ID":"c1620f19-8aa3-45cf-931b-7ae0e5cd14cf","Type":"ContainerStarted","Data":"93b6de2fdb2b76a938524f136d330d58be5c5d2748fe12c99730e27467baf23d"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.300773 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"e4d112aa968cc9f84eb8f3936806ca2ebedacd7325f2b1ad1540f3e23a310d2c"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.308918 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" 
event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"2e9c614ccbf81e4d97190e423f3341eb01bcb81d27f6b053a8f19fa8c9c4d378"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.317681 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"8d47d1aee1afff54392a6ee1709e62dbb51c2836725f7e95b73f0c71a3e8fec6"} Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.663862 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:14:52 crc kubenswrapper[3556]: I1128 00:14:52.664322 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.351871 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-649bd778b4-tt5tw" event={"ID":"45a8038e-e7f2-4d93-a6f5-7753aa54e63f","Type":"ContainerStarted","Data":"c02a18fcd4c701deed037c4ba35e32c41d07506cc93c9d6013b30f878a1cffcf"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.370518 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"8d22d2dc267ba2fd2822127f1fa70b1070fcab2ec4dc963ee09aa11b6a091bba"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.370558 3556 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-65476884b9-9wcvx" event={"ID":"6268b7fe-8910-4505-b404-6f1df638105c","Type":"ContainerStarted","Data":"55036bc1d54ae3aaf403d614ccdc5f78b478dec9bec4e60d7d64b2881282e73a"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.372138 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.381198 3556 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.381285 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.393930 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"065c9c87192408d819a036c6c7041c7be48f4f04b6c761caac69659c27ced1d9"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.394414 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"153d6fa5dc95436ed026d80695d5d6c029d1367b8be85ff1e981e1743ebd4cda"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.452734 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" 
event={"ID":"e9127708-ccfd-4891-8a3a-f0cacb77e0f4","Type":"ContainerStarted","Data":"700417740fe276c0a156f16dd6609de82f02a2f0af2de3db81666da4e0961ea6"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.461452 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.482361 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"dad7c5bc6050b55b81f5d73285ff312bddfd8f5bbdd27bb277aae96ae9f7572d"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.482419 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"8cbfc6cf27207bcff49a4dbbba9e434c49d86d6b741f7152f73c8f7f1cd9d77e"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.515598 3556 generic.go:334] "Generic (PLEG): container finished" podID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerID="26a5b648b50cc92643660bce021edabbb9aa7fc9480bae29b43dce6866cf0d5e" exitCode=0 Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.515737 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerDied","Data":"26a5b648b50cc92643660bce021edabbb9aa7fc9480bae29b43dce6866cf0d5e"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.557929 3556 patch_prober.go:28] interesting pod/console-operator-5dbbc74dc9-cp5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Nov 28 
00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.558034 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" podUID="e9127708-ccfd-4891-8a3a-f0cacb77e0f4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.62:8443/readyz\": dial tcp 10.217.0.62:8443: connect: connection refused" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.608359 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"cff75b3a9b1cf3c7451c215be42bb73789ae99ce6dde2ad52b65ab822c88dd5e"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.608410 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"38737addc17fe2fb8916ef9a005680142ed874285b098c929d5c23d94209be8b"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.642912 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" event={"ID":"bd556935-a077-45df-ba3f-d42c39326ccd","Type":"ContainerStarted","Data":"02d5c71793f07148e3375ddbc435e628b94c0dc2dbb364db4c89677c645183e9"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.644686 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.654698 3556 patch_prober.go:28] interesting pod/packageserver-8464bcc55b-sjnqz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.654793 3556 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" podUID="bd556935-a077-45df-ba3f-d42c39326ccd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.668947 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-546b4f8984-pwccz" event={"ID":"6d67253e-2acd-4bc1-8185-793587da4f17","Type":"ContainerStarted","Data":"940f8e4b50c297d5e714d0548f345cd6e889997efd4d2b28fe348e22786549f2"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.723236 3556 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="c5b0f5e0b45a9b90754773eb847b69b94dfbc8067427503505055ed085186bc5" exitCode=0 Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.723334 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"c5b0f5e0b45a9b90754773eb847b69b94dfbc8067427503505055ed085186bc5"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.753713 3556 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="779855bb99a063294e4c710e0bad766ae6f8ac833a3636bc665bf1362c86850b" exitCode=0 Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.753828 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"779855bb99a063294e4c710e0bad766ae6f8ac833a3636bc665bf1362c86850b"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.779558 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"fca4b12177196bd4e8a345b499ea0babc3a3bb23fdb9e5bc4c49487b810afbc5"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.779613 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-78d54458c4-sc8h7" event={"ID":"ed024e5d-8fc2-4c22-803d-73f3c9795f19","Type":"ContainerStarted","Data":"bb2a02e7e18ee9f1d00bf872694cfd4a307da977d187a58c2335968006c7f51e"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.800248 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5d9b995f6b-fcgd7" event={"ID":"71af81a9-7d43-49b2-9287-c375900aa905","Type":"ContainerStarted","Data":"9a31fe2f327b6725a97baa42c5c4ff56540cf8170aef57b0c27de75fcbc4b820"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.848825 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-6df6df6b6b-58shh" event={"ID":"297ab9b6-2186-4d5b-a952-2bfd59af63c4","Type":"ContainerStarted","Data":"226d6b5cc501770d1e11f9461dc0bfc49dd468c770bc91fedd49290935526d6a"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.872183 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"8173764db0bf8fac4f55ef7ff9dcd4f5d5427515241a85d46739dbfd66145f32"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.875285 3556 generic.go:334] "Generic (PLEG): container finished" podID="5bacb25d-97b6-4491-8fb4-99feae1d802a" containerID="649be90519aac22adf79851438d780303df180d0ba86f37a85d4a6143f831011" exitCode=0 Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.875325 3556 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerDied","Data":"649be90519aac22adf79851438d780303df180d0ba86f37a85d4a6143f831011"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.905146 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qdfr4" event={"ID":"a702c6d2-4dde-4077-ab8c-0f8df804bf7a","Type":"ContainerStarted","Data":"f45f5b4551bc349494f972448670a22faae2f8cadbc16cba82b6c5bb211306f6"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.936563 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" event={"ID":"c085412c-b875-46c9-ae3e-e6b0d8067091","Type":"ContainerStarted","Data":"68d0263d5ea53927b0c16ea1c19dca931674535674940effd238d77694e08658"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.937247 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.964373 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerStarted","Data":"8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.964413 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.988425 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" 
event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"efc3a8efe38030af144a59427d2151ee4d7e8aea1e4e01071775e1d0a0c65076"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.988477 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7cc7ff75d5-g9qv8" event={"ID":"ebf09b15-4bb1-44bf-9d54-e76fad5cf76e","Type":"ContainerStarted","Data":"2faf11af73c9e2be651733dd6ee046787f3b3744b790730a87ef07c194678d5e"} Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.989121 3556 patch_prober.go:28] interesting pod/olm-operator-6d8474f75f-x54mh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 28 00:14:53 crc kubenswrapper[3556]: I1128 00:14:53.989173 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" podUID="c085412c-b875-46c9-ae3e-e6b0d8067091" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 28 00:14:54 crc kubenswrapper[3556]: I1128 00:14:54.011843 3556 patch_prober.go:28] interesting pod/route-controller-manager-776b8b7477-sfpvs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Nov 28 00:14:54 crc kubenswrapper[3556]: I1128 00:14:54.011932 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": 
dial tcp 10.217.0.88:8443: connect: connection refused" Nov 28 00:14:54 crc kubenswrapper[3556]: I1128 00:14:54.017996 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-7c88c4c865-kn67m" event={"ID":"43ae1c37-047b-4ee2-9fee-41e337dd4ac8","Type":"ContainerStarted","Data":"ac61c0940d51e3151eba586b2c090232d8c5e1918af220974ba563caadcff5c0"} Nov 28 00:14:54 crc kubenswrapper[3556]: I1128 00:14:54.996132 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:54 crc kubenswrapper[3556]: I1128 00:14:54.996174 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" event={"ID":"59748b9b-c309-4712-aa85-bb38d71c4915","Type":"ContainerStarted","Data":"c3d05e554efbf07aeead12e34bc2e0789968097b9000b3b6282463a8004ea5a1"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.001192 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"2b3528882f60b73a1bb3b37f44d014444897721c33e099069c61506495073f57"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.005715 3556 patch_prober.go:28] interesting pod/console-conversion-webhook-595f9969b-l6z49 container/conversion-webhook-server namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" start-of-body= Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.005787 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" podUID="59748b9b-c309-4712-aa85-bb38d71c4915" containerName="conversion-webhook-server" probeResult="failure" 
output="Get \"https://10.217.0.61:9443/readyz\": dial tcp 10.217.0.61:9443: connect: connection refused" Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.022248 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"6fb9aed2a2ab798b9c9d7aa3d9a50eb3f40ca6e7bafdd71fc9ca19247fdf87ea"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.036131 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"611739592d70dc36f23c0ff6f6210c500ea07d3f6adf1d5f8553415ebce4f89d"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.036177 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-v54bt" event={"ID":"34a48baf-1bee-4921-8bb2-9b7320e76f79","Type":"ContainerStarted","Data":"75cb373bc29e61dead709efc796e0bb0119071d66e828c00b8d49f8d87fd3b76"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.036711 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.043823 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"70064da5e10f7978c5e7bcc61510b2165ef1a009629ea13abb01705f9c0fff48"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.061526 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"f8037a46338b8257fb6a39c6b83fac4b50c7be3e87f77cc746bdf506cac8e6a3"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 
00:14:55.069662 3556 generic.go:334] "Generic (PLEG): container finished" podID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerID="8546da780e65b99ced4131fab1e1e3e67e35bad93c5347655d1b973ae3048d5c" exitCode=0 Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.069727 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerDied","Data":"8546da780e65b99ced4131fab1e1e3e67e35bad93c5347655d1b973ae3048d5c"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.127498 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"23e43bc170685bcd2cbe67cbacd8a1c9f13ea95bdc56dad0bfd0700dc1a3ba12"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.135037 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"b7ba7bc19c729feb35432227cee80a6545afa81ab21a6d9a4be24135afc76ead"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.137672 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"e8db3396cd0b67ed159e50674d42f6c254acdd724625573bac45968cc66f2830"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.137711 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2vhcn" event={"ID":"0b5d722a-1123-4935-9740-52a08d018bc9","Type":"ContainerStarted","Data":"1f5bc1e2c3665f0d9a4fae1ee670a997331bc6268714f6119986733bd76aea26"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.142289 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"21b5011a009265c36f74b035721ee71dcca64dce00dcee67a1912391382bdc41"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.148228 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerStarted","Data":"0978fec8ebef5e4c6216203c48d5dcf678b35143ed63c9121fa323667d67e61f"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.149586 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.153703 3556 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="37664d7f50a3df46c30fdeaad2899d1d2d3a541ea3bb29fe926dcd31232c7787" exitCode=0 Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.153778 3556 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.153829 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.153996 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" 
event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"37664d7f50a3df46c30fdeaad2899d1d2d3a541ea3bb29fe926dcd31232c7787"} Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.155808 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.176724 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-857456c46-7f5wf" Nov 28 00:14:55 crc kubenswrapper[3556]: I1128 00:14:55.184257 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.184420 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-788b7c6b6c-ctdmb" event={"ID":"4f8aa612-9da0-4a2b-911e-6a1764a4e74e","Type":"ContainerStarted","Data":"ac2c2fe241579807f542de1a93ec5a499d577171abb6ca2a172b9ea0b125d862"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.203768 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" event={"ID":"530553aa-0a1d-423e-8a22-f5eb4bdbb883","Type":"ContainerStarted","Data":"e2e8c15225ad33cd86b466156c75ab1cfda807a0592001cae361916e4475f2e9"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.240871 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-666f99b6f-kk8kg" event={"ID":"e4a7de23-6134-4044-902a-0900dc04a501","Type":"ContainerStarted","Data":"9ce9eaa2ec79072b26768dbc3b5d6e41fb320faa163bc67616d41e7bd16590fa"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.384878 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" 
event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"443f4070728959938bbdbd90cc02c01d09c9bee0ba8b0278b41daee46af7639a"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.403565 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"745dc75b26ac55cf4eafd1f03de03a00f080850e45e7b1d411665d67be90a3d4"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.408629 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-7769bd8d7d-q5cvv" event={"ID":"b54e8941-2fc4-432a-9e51-39684df9089e","Type":"ContainerStarted","Data":"c4ebe71973e21279b3d08e37ddf3b630c8c342945595e6ef881600006a5ca467"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.420870 3556 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="e5c736d53be1471d1ab07cfd866314ffad6eb9cc5b747813b16ae47477d77f9c" exitCode=0 Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.420947 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"e5c736d53be1471d1ab07cfd866314ffad6eb9cc5b747813b16ae47477d77f9c"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.459612 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" event={"ID":"5bacb25d-97b6-4491-8fb4-99feae1d802a","Type":"ContainerStarted","Data":"9c461843118101b6d6b51eeb92851fa62cf6e7021aa0cab70cde38caa159d29f"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.489894 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" 
event={"ID":"63eb7413-02c3-4d6e-bb48-e5ffe5ce15be","Type":"ContainerStarted","Data":"7ecca66e0cb645384177b1b2663cf60a57cbb3acdd808e1b4e82e156566b2d1b"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.491109 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.502037 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"95db5e8c21e815e77164a279e31178d004e38b214344669106a12d790649e62e"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.533504 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-768d5b5d86-722mg" event={"ID":"0b5c38ff-1fa8-4219-994d-15776acd4a4d","Type":"ContainerStarted","Data":"539b5e7552c5581ce2c71ed4ed443d86ee3ab1198bf7761d7bec8bb05cbaa8fc"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.585469 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-75f687757b-nz2xb" event={"ID":"10603adc-d495-423c-9459-4caa405960bb","Type":"ContainerStarted","Data":"9c27421b071a0464dd7c7891dc355d0d7ceda6c26c2ede2ecc5993d0586a4f68"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.615882 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbw49" event={"ID":"13045510-8717-4a71-ade4-be95a76440a7","Type":"ContainerStarted","Data":"a5e6a97a89f758c655ec3a4b636ecd224806380803931faa0a4060d93cd46ddd"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.616968 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbw49" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.635546 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"009f40c45b5621017066090d7741a05d81190d0dded0e717dc4802011a1e99b7"} Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.652047 3556 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.652134 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.652984 3556 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.653031 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.671191 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5dbbc74dc9-cp5cd" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.671251 3556 kubelet.go:2533] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.680677 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6d8474f75f-x54mh" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.680753 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-conversion-webhook-595f9969b-l6z49" Nov 28 00:14:56 crc kubenswrapper[3556]: I1128 00:14:56.699113 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-8464bcc55b-sjnqz" Nov 28 00:14:57 crc kubenswrapper[3556]: I1128 00:14:57.667540 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" event={"ID":"41e8708a-e40d-4d28-846b-c52eda4d1755","Type":"ContainerStarted","Data":"86df63cc14ea4cb4b64a9ae57e80099dc928a49415c9b93d424968c1e64acb48"} Nov 28 00:14:57 crc kubenswrapper[3556]: I1128 00:14:57.676743 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-bc474d5d6-wshwg" event={"ID":"f728c15e-d8de-4a9a-a3ea-fdcead95cb91","Type":"ContainerStarted","Data":"54f04d9a6e477c0c173f850962ba050a982b66f16dcfa367f97cba7f952225da"} Nov 28 00:14:57 crc kubenswrapper[3556]: I1128 00:14:57.679514 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:57 crc kubenswrapper[3556]: I1128 00:14:57.684111 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.071112 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.172826 3556 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.384586 3556 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T00:14:58.172868093Z","Handler":null,"Name":""} Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.465848 3556 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.465932 3556 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.758562 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"3964ea347a423e38662eea57a29be570e0eeb081778305057625177e38baaf3d"} Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.955481 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.965367 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-74fc7c67cc-xqf8b" Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.989869 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:58 crc kubenswrapper[3556]: I1128 00:14:58.990798 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.008355 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.095810 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.095854 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.119607 3556 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.119705 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.147561 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.147604 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.187859 3556 patch_prober.go:28] interesting 
pod/apiserver-7fc54b8dd7-d2bhp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]log ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]etcd ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/generic-apiserver-start-informers ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/max-in-flight-filter ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 28 00:14:59 crc kubenswrapper[3556]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/project.openshift.io-projectcache ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/openshift.io-startinformers ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 28 00:14:59 crc kubenswrapper[3556]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 28 00:14:59 crc kubenswrapper[3556]: healthz check failed Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.188028 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" podUID="41e8708a-e40d-4d28-846b-c52eda4d1755" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.242982 3556 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Liveness probe status=failure 
output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.243107 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.243156 3556 patch_prober.go:28] interesting pod/downloads-65476884b9-9wcvx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.243310 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-65476884b9-9wcvx" podUID="6268b7fe-8910-4505-b404-6f1df638105c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.66:8080/\": dial tcp 10.217.0.66:8080: connect: connection refused" Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.826336 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"59d8fe04888756b01c8bfa7222784cc9c3cac7385b9e2a5e79db0d0ca3780533"} Nov 28 00:14:59 crc kubenswrapper[3556]: I1128 00:14:59.834822 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-69c565c9b6-vbdpd" Nov 28 00:15:04 crc kubenswrapper[3556]: I1128 00:15:04.153526 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:15:04 crc kubenswrapper[3556]: I1128 00:15:04.159311 3556 kubelet.go:2533] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7fc54b8dd7-d2bhp" Nov 28 00:15:04 crc kubenswrapper[3556]: I1128 00:15:04.174024 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbw49" Nov 28 00:15:09 crc kubenswrapper[3556]: I1128 00:15:09.090182 3556 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" start-of-body= Nov 28 00:15:09 crc kubenswrapper[3556]: I1128 00:15:09.090322 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": dial tcp 10.217.0.73:8443: connect: connection refused" Nov 28 00:15:09 crc kubenswrapper[3556]: I1128 00:15:09.260113 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-65476884b9-9wcvx" Nov 28 00:15:18 crc kubenswrapper[3556]: I1128 00:15:18.686856 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:15:18 crc kubenswrapper[3556]: I1128 00:15:18.687741 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:15:18 crc kubenswrapper[3556]: I1128 00:15:18.687784 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:15:18 crc kubenswrapper[3556]: I1128 00:15:18.687805 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:15:18 crc kubenswrapper[3556]: I1128 00:15:18.687830 3556 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:15:19 crc kubenswrapper[3556]: I1128 00:15:19.094883 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:15:19 crc kubenswrapper[3556]: I1128 00:15:19.101705 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-644bb77b49-5x5xk" Nov 28 00:15:22 crc kubenswrapper[3556]: I1128 00:15:22.664724 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:15:22 crc kubenswrapper[3556]: I1128 00:15:22.665267 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.586944 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28fk8"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.587164 3556 topology_manager.go:215] "Topology Admit Handler" podUID="949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" podNamespace="openshift-marketplace" podName="redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.634641 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.638392 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4qq2b"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.638474 3556 topology_manager.go:215] "Topology Admit Handler" podUID="6ff4c74d-d051-42b5-b30b-75580e80299d" podNamespace="openshift-marketplace" podName="certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.641467 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.642126 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-catalog-content\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.642227 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9p87\" (UniqueName: \"kubernetes.io/projected/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-kube-api-access-m9p87\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.642254 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-utilities\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.644371 3556 kubelet.go:2429] 
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.644523 3556 topology_manager.go:215] "Topology Admit Handler" podUID="b043bb6a-7727-4c8f-8fc4-64660e345ec4" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.645245 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.648071 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ztgxm"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.648123 3556 topology_manager.go:215] "Topology Admit Handler" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" podNamespace="openshift-marketplace" podName="redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.649048 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.649092 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.649199 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.653430 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29404800-brn7x"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.654077 3556 topology_manager.go:215] "Topology Admit Handler" podUID="e3327d8e-10c1-403b-bad0-cfda7ae4295f" podNamespace="openshift-image-registry" podName="image-pruner-29404800-brn7x" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.655645 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.656944 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qq2b"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.658210 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-nzhll" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.658717 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.660202 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29404800-brn7x"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.665615 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.670904 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztgxm"] Nov 28 00:15:23 crc 
kubenswrapper[3556]: I1128 00:15:23.675942 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28fk8"] Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.742951 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-m9p87\" (UniqueName: \"kubernetes.io/projected/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-kube-api-access-m9p87\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743046 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxw9\" (UniqueName: \"kubernetes.io/projected/b043bb6a-7727-4c8f-8fc4-64660e345ec4-kube-api-access-zsxw9\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743081 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-utilities\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743145 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vnr6\" (UniqueName: \"kubernetes.io/projected/e3327d8e-10c1-403b-bad0-cfda7ae4295f-kube-api-access-4vnr6\") pod \"image-pruner-29404800-brn7x\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743177 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h4r7g\" (UniqueName: \"kubernetes.io/projected/636f4587-587c-4c55-8f7f-8722b05f3bf5-kube-api-access-h4r7g\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743212 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b043bb6a-7727-4c8f-8fc4-64660e345ec4-config-volume\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743238 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-catalog-content\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743271 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-catalog-content\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743298 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b043bb6a-7727-4c8f-8fc4-64660e345ec4-secret-volume\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc 
kubenswrapper[3556]: I1128 00:15:23.743335 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-utilities\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743365 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgsg\" (UniqueName: \"kubernetes.io/projected/6ff4c74d-d051-42b5-b30b-75580e80299d-kube-api-access-qcgsg\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743397 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-utilities\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743435 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3327d8e-10c1-403b-bad0-cfda7ae4295f-serviceca\") pod \"image-pruner-29404800-brn7x\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.743469 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-catalog-content\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " 
pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.744066 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-catalog-content\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.744527 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-utilities\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.779340 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9p87\" (UniqueName: \"kubernetes.io/projected/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-kube-api-access-m9p87\") pod \"redhat-operators-28fk8\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845075 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-utilities\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845147 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3327d8e-10c1-403b-bad0-cfda7ae4295f-serviceca\") pod \"image-pruner-29404800-brn7x\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:23 crc 
kubenswrapper[3556]: I1128 00:15:23.845204 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxw9\" (UniqueName: \"kubernetes.io/projected/b043bb6a-7727-4c8f-8fc4-64660e345ec4-kube-api-access-zsxw9\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845234 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4vnr6\" (UniqueName: \"kubernetes.io/projected/e3327d8e-10c1-403b-bad0-cfda7ae4295f-kube-api-access-4vnr6\") pod \"image-pruner-29404800-brn7x\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845262 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-h4r7g\" (UniqueName: \"kubernetes.io/projected/636f4587-587c-4c55-8f7f-8722b05f3bf5-kube-api-access-h4r7g\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845296 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b043bb6a-7727-4c8f-8fc4-64660e345ec4-config-volume\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845336 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-catalog-content\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") 
" pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845363 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-catalog-content\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845384 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b043bb6a-7727-4c8f-8fc4-64660e345ec4-secret-volume\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845411 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-utilities\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.845433 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-qcgsg\" (UniqueName: \"kubernetes.io/projected/6ff4c74d-d051-42b5-b30b-75580e80299d-kube-api-access-qcgsg\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.846506 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-utilities\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " 
pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.847035 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-catalog-content\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.847272 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-catalog-content\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.848759 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b043bb6a-7727-4c8f-8fc4-64660e345ec4-config-volume\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.850571 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3327d8e-10c1-403b-bad0-cfda7ae4295f-serviceca\") pod \"image-pruner-29404800-brn7x\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.852137 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-utilities\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " 
pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.853134 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b043bb6a-7727-4c8f-8fc4-64660e345ec4-secret-volume\") pod \"collect-profiles-29404815-8hvvc\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.871794 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4r7g\" (UniqueName: \"kubernetes.io/projected/636f4587-587c-4c55-8f7f-8722b05f3bf5-kube-api-access-h4r7g\") pod \"redhat-marketplace-ztgxm\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") " pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.876282 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vnr6\" (UniqueName: \"kubernetes.io/projected/e3327d8e-10c1-403b-bad0-cfda7ae4295f-kube-api-access-4vnr6\") pod \"image-pruner-29404800-brn7x\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.879152 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcgsg\" (UniqueName: \"kubernetes.io/projected/6ff4c74d-d051-42b5-b30b-75580e80299d-kube-api-access-qcgsg\") pod \"certified-operators-4qq2b\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.880999 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxw9\" (UniqueName: \"kubernetes.io/projected/b043bb6a-7727-4c8f-8fc4-64660e345ec4-kube-api-access-zsxw9\") pod \"collect-profiles-29404815-8hvvc\" (UID: 
\"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.971895 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.982177 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:15:23 crc kubenswrapper[3556]: I1128 00:15:23.994745 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:15:24 crc kubenswrapper[3556]: I1128 00:15:24.001078 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztgxm" Nov 28 00:15:24 crc kubenswrapper[3556]: I1128 00:15:24.009958 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:15:29 crc kubenswrapper[3556]: I1128 00:15:29.197230 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-84d578d794-jw7r2" Nov 28 00:15:29 crc kubenswrapper[3556]: I1128 00:15:29.239884 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-v54bt" Nov 28 00:15:30 crc kubenswrapper[3556]: I1128 00:15:30.018853 3556 generic.go:334] "Generic (PLEG): container finished" podID="aa90b3c2-febd-4588-a063-7fbbe82f00c1" containerID="26ea99a990c8b29e8794df03ad0ad41b98f38cf49bbad1e53ff53371275f3629" exitCode=0 Nov 28 00:15:30 crc kubenswrapper[3556]: I1128 00:15:30.018897 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerDied","Data":"26ea99a990c8b29e8794df03ad0ad41b98f38cf49bbad1e53ff53371275f3629"} Nov 28 00:15:52 crc kubenswrapper[3556]: I1128 00:15:52.664612 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:15:52 crc kubenswrapper[3556]: I1128 00:15:52.665617 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:15:52 crc kubenswrapper[3556]: I1128 00:15:52.665677 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:15:52 crc kubenswrapper[3556]: I1128 00:15:52.666853 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5825caecff59ec411acfa2888077a9dd43f86687eece88fb8f014b10c1a3740e"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 00:15:52 crc kubenswrapper[3556]: I1128 00:15:52.667055 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://5825caecff59ec411acfa2888077a9dd43f86687eece88fb8f014b10c1a3740e" gracePeriod=600 Nov 28 00:15:59 crc kubenswrapper[3556]: I1128 00:15:59.204882 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="5825caecff59ec411acfa2888077a9dd43f86687eece88fb8f014b10c1a3740e" exitCode=0 Nov 28 00:15:59 crc kubenswrapper[3556]: I1128 00:15:59.204977 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"5825caecff59ec411acfa2888077a9dd43f86687eece88fb8f014b10c1a3740e"} Nov 28 00:16:14 crc kubenswrapper[3556]: I1128 00:16:14.829690 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc"] Nov 28 00:16:14 crc kubenswrapper[3556]: I1128 00:16:14.846171 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28fk8"] Nov 28 00:16:14 crc kubenswrapper[3556]: I1128 00:16:14.857223 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-ztgxm"] Nov 28 00:16:14 crc kubenswrapper[3556]: W1128 00:16:14.873864 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod636f4587_587c_4c55_8f7f_8722b05f3bf5.slice/crio-ddebb0e7d6e6fdcc0f5234eee5cdc66f50ab3e81de711dcb04811bbe03f5c863 WatchSource:0}: Error finding container ddebb0e7d6e6fdcc0f5234eee5cdc66f50ab3e81de711dcb04811bbe03f5c863: Status 404 returned error can't find the container with id ddebb0e7d6e6fdcc0f5234eee5cdc66f50ab3e81de711dcb04811bbe03f5c863 Nov 28 00:16:14 crc kubenswrapper[3556]: W1128 00:16:14.874745 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod949f9fd1_be5f_4542_ab7a_9a4ce2b3b8a2.slice/crio-387bb41983e1de16d2a9eb898095afeff2a100a3be5c13d7fe4a56fba89fac42 WatchSource:0}: Error finding container 387bb41983e1de16d2a9eb898095afeff2a100a3be5c13d7fe4a56fba89fac42: Status 404 returned error can't find the container with id 387bb41983e1de16d2a9eb898095afeff2a100a3be5c13d7fe4a56fba89fac42 Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.001099 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29404800-brn7x"] Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.007653 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qq2b"] Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.300798 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29404800-brn7x" event={"ID":"e3327d8e-10c1-403b-bad0-cfda7ae4295f","Type":"ContainerStarted","Data":"341f3616f1d67dfc39bb55ac54df6390cc286dd49bbb2517250b899c082435e2"} Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.303093 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" 
event={"ID":"aa90b3c2-febd-4588-a063-7fbbe82f00c1","Type":"ContainerStarted","Data":"723f6de0e4d4f8a4dd5e2f42d028d581ce8962c8f80632026e89f72a0802efd7"} Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.309055 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28fk8" event={"ID":"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2","Type":"ContainerStarted","Data":"387bb41983e1de16d2a9eb898095afeff2a100a3be5c13d7fe4a56fba89fac42"} Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.310144 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" event={"ID":"b043bb6a-7727-4c8f-8fc4-64660e345ec4","Type":"ContainerStarted","Data":"123a7f2a8f030c689f462e1dcc09a56cabe6b53967660afcb03c51ddc1557a0a"} Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.311767 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerStarted","Data":"42458ce58837741299b0c5d28c9c9740ea1b1bee450e6863b9f24e019fc70b49"} Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.312847 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztgxm" event={"ID":"636f4587-587c-4c55-8f7f-8722b05f3bf5","Type":"ContainerStarted","Data":"ddebb0e7d6e6fdcc0f5234eee5cdc66f50ab3e81de711dcb04811bbe03f5c863"} Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.314689 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"acafa606c4aa1bb9f7edfa1daf5c757ca7084d520498133fa4c1d1f00743db14"} Nov 28 00:16:15 crc kubenswrapper[3556]: I1128 00:16:15.315533 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qq2b" 
event={"ID":"6ff4c74d-d051-42b5-b30b-75580e80299d","Type":"ContainerStarted","Data":"ba9f991522ebdaabd44d80123c26b8b8ccfd29b8da37a0f1275044aba285022f"} Nov 28 00:16:16 crc kubenswrapper[3556]: I1128 00:16:16.262460 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:16:16 crc kubenswrapper[3556]: I1128 00:16:16.269334 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:16:16 crc kubenswrapper[3556]: I1128 00:16:16.364032 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"b80434ca98c3aefe56302e56b847d1b69f46aa28a5e758341619dac2c0dbd07e"} Nov 28 00:16:16 crc kubenswrapper[3556]: I1128 00:16:16.366669 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerStarted","Data":"0f19d2f1712f9450cd77baf638932f67c27d1f4518c975033e17a85c9f709684"} Nov 28 00:16:16 crc kubenswrapper[3556]: I1128 00:16:16.367336 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:16:16 crc kubenswrapper[3556]: I1128 00:16:16.370918 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5c9bf7bc58-6jctv" Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.375407 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hvm8g" event={"ID":"12e733dd-0939-4f1b-9cbb-13897e093787","Type":"ContainerStarted","Data":"1f1b1cba94a5b4dc86f6e34e402c0661c4f88ce62cd26bdfa1235a6cbf60e98d"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.378548 3556 generic.go:334] "Generic (PLEG): 
container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="b80434ca98c3aefe56302e56b847d1b69f46aa28a5e758341619dac2c0dbd07e" exitCode=0 Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.378602 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"b80434ca98c3aefe56302e56b847d1b69f46aa28a5e758341619dac2c0dbd07e"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.384803 3556 generic.go:334] "Generic (PLEG): container finished" podID="949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" containerID="f130c785cd1e052ccbf25fd3edb68446feb675331fa234d31a7119c93ad64a08" exitCode=0 Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.385035 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28fk8" event={"ID":"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2","Type":"ContainerDied","Data":"f130c785cd1e052ccbf25fd3edb68446feb675331fa234d31a7119c93ad64a08"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.387404 3556 generic.go:334] "Generic (PLEG): container finished" podID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerID="7693ae48dd7fe1baaf5a6aa91c7de341ac12854c533bb7ff4dd25a2c4f4a7fd0" exitCode=0 Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.387512 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztgxm" event={"ID":"636f4587-587c-4c55-8f7f-8722b05f3bf5","Type":"ContainerDied","Data":"7693ae48dd7fe1baaf5a6aa91c7de341ac12854c533bb7ff4dd25a2c4f4a7fd0"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.393096 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerStarted","Data":"ecda30d91c518d762f9d808359ce1f678ce75a5920e5e49591d12a906f39246a"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 
00:16:17.400610 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29404800-brn7x" event={"ID":"e3327d8e-10c1-403b-bad0-cfda7ae4295f","Type":"ContainerStarted","Data":"f2d6eb38dc59ae8a2d96942c5f58267be5af80a5e842d62608f42f96c773e017"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.402516 3556 generic.go:334] "Generic (PLEG): container finished" podID="6ff4c74d-d051-42b5-b30b-75580e80299d" containerID="556b0ec51ac4baeb53677487eddbbf004c813e347e1a6e4839b10bb7417d1f5f" exitCode=0 Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.402623 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qq2b" event={"ID":"6ff4c74d-d051-42b5-b30b-75580e80299d","Type":"ContainerDied","Data":"556b0ec51ac4baeb53677487eddbbf004c813e347e1a6e4839b10bb7417d1f5f"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.404863 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerStarted","Data":"ec5ed9e559fe9e57bd17180538d0d430c9e1d35fe59ac0fddef4719a7fbbca91"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.406070 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" event={"ID":"b043bb6a-7727-4c8f-8fc4-64660e345ec4","Type":"ContainerStarted","Data":"8b9d5ed6ecaf5cb1189407632ad1331c49085c5972c198a86bc81f10ecc752c9"} Nov 28 00:16:17 crc kubenswrapper[3556]: I1128 00:16:17.436761 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" podStartSLOduration=77.436684828 podStartE2EDuration="1m17.436684828s" podCreationTimestamp="2025-11-28 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 
00:16:17.432287057 +0000 UTC m=+239.024519047" watchObservedRunningTime="2025-11-28 00:16:17.436684828 +0000 UTC m=+239.028916848" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.529656 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qq2b"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.535345 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7287f"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.535664 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7287f" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content" containerID="cri-o://0f19d2f1712f9450cd77baf638932f67c27d1f4518c975033e17a85c9f709684" gracePeriod=30 Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.539138 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.551216 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.551456 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sdddl" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content" containerID="cri-o://42458ce58837741299b0c5d28c9c9740ea1b1bee450e6863b9f24e019fc70b49" gracePeriod=30 Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.562864 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.563366 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" 
containerName="marketplace-operator" containerID="cri-o://0978fec8ebef5e4c6216203c48d5dcf678b35143ed63c9121fa323667d67e61f" gracePeriod=30 Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.576196 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.579047 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-xd2kb"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.579240 3556 topology_manager.go:215] "Topology Admit Handler" podUID="57b23f79-74b4-4ba9-bf50-aeaa322b31df" podNamespace="openshift-marketplace" podName="marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.580734 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.583430 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-b4zbk" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.586204 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztgxm"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.602145 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-xd2kb"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.607544 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28fk8"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.614249 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"] Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.672816 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b23f79-74b4-4ba9-bf50-aeaa322b31df-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.673204 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57b23f79-74b4-4ba9-bf50-aeaa322b31df-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.673407 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncgfq\" (UniqueName: \"kubernetes.io/projected/57b23f79-74b4-4ba9-bf50-aeaa322b31df-kube-api-access-ncgfq\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.688494 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.688589 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.688641 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.688661 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 
00:16:18.688695 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.775128 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b23f79-74b4-4ba9-bf50-aeaa322b31df-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.775185 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57b23f79-74b4-4ba9-bf50-aeaa322b31df-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.775251 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ncgfq\" (UniqueName: \"kubernetes.io/projected/57b23f79-74b4-4ba9-bf50-aeaa322b31df-kube-api-access-ncgfq\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.777086 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57b23f79-74b4-4ba9-bf50-aeaa322b31df-marketplace-trusted-ca\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.782142 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57b23f79-74b4-4ba9-bf50-aeaa322b31df-marketplace-operator-metrics\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.791976 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncgfq\" (UniqueName: \"kubernetes.io/projected/57b23f79-74b4-4ba9-bf50-aeaa322b31df-kube-api-access-ncgfq\") pod \"marketplace-operator-8b455464d-xd2kb\" (UID: \"57b23f79-74b4-4ba9-bf50-aeaa322b31df\") " pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.916067 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-b4zbk" Nov 28 00:16:18 crc kubenswrapper[3556]: I1128 00:16:18.923405 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:19 crc kubenswrapper[3556]: I1128 00:16:19.076590 3556 patch_prober.go:28] interesting pod/marketplace-operator-8b455464d-f9xdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Nov 28 00:16:19 crc kubenswrapper[3556]: I1128 00:16:19.077263 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.13:8080/healthz\": dial tcp 10.217.0.13:8080: connect: connection refused" Nov 28 00:16:19 crc kubenswrapper[3556]: I1128 00:16:19.209748 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-xd2kb"] Nov 28 00:16:19 crc kubenswrapper[3556]: I1128 00:16:19.418681 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" event={"ID":"57b23f79-74b4-4ba9-bf50-aeaa322b31df","Type":"ContainerStarted","Data":"2721554b40252e5fdcf8686e923a69873ad7f3457a599f63ea46232091ea8984"} Nov 28 00:16:22 crc kubenswrapper[3556]: I1128 00:16:22.979985 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7287f_887d596e-c519-4bfa-af90-3edd9e1b2f0f/extract-content/1.log" Nov 28 00:16:22 crc kubenswrapper[3556]: I1128 00:16:22.981248 3556 generic.go:334] "Generic (PLEG): container finished" podID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerID="0f19d2f1712f9450cd77baf638932f67c27d1f4518c975033e17a85c9f709684" exitCode=2 Nov 28 00:16:22 crc kubenswrapper[3556]: I1128 00:16:22.981279 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"0f19d2f1712f9450cd77baf638932f67c27d1f4518c975033e17a85c9f709684"} Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.274120 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.407613 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.407835 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.407983 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") pod \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\" (UID: \"3482be94-0cdb-4e2a-889b-e5fac59fdbf5\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.408953 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.416729 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.418107 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg" (OuterVolumeSpecName: "kube-api-access-rg2zg") pod "3482be94-0cdb-4e2a-889b-e5fac59fdbf5" (UID: "3482be94-0cdb-4e2a-889b-e5fac59fdbf5"). InnerVolumeSpecName "kube-api-access-rg2zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.457572 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7287f_887d596e-c519-4bfa-af90-3edd9e1b2f0f/extract-content/1.log" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.457928 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.461471 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/extract-content/1.log" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.461796 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.508792 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.508886 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.508936 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.508989 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.509102 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") pod \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\" (UID: \"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.509141 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") pod \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\" (UID: \"887d596e-c519-4bfa-af90-3edd9e1b2f0f\") " Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.509401 3556 reconciler_common.go:300] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.509426 3556 reconciler_common.go:300] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.509440 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rg2zg\" (UniqueName: \"kubernetes.io/projected/3482be94-0cdb-4e2a-889b-e5fac59fdbf5-kube-api-access-rg2zg\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.513930 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities" (OuterVolumeSpecName: "utilities") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.514232 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5" (OuterVolumeSpecName: "kube-api-access-ncrf5") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "kube-api-access-ncrf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.516228 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities" (OuterVolumeSpecName: "utilities") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.530098 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt" (OuterVolumeSpecName: "kube-api-access-9p8gt") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "kube-api-access-9p8gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.610398 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.610438 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.610455 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ncrf5\" (UniqueName: \"kubernetes.io/projected/887d596e-c519-4bfa-af90-3edd9e1b2f0f-kube-api-access-ncrf5\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.610469 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9p8gt\" (UniqueName: \"kubernetes.io/projected/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-kube-api-access-9p8gt\") on 
node \"crc\" DevicePath \"\"" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.709097 3556 generic.go:334] "Generic (PLEG): container finished" podID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerID="0978fec8ebef5e4c6216203c48d5dcf678b35143ed63c9121fa323667d67e61f" exitCode=0 Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.709203 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"0978fec8ebef5e4c6216203c48d5dcf678b35143ed63c9121fa323667d67e61f"} Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.709257 3556 scope.go:117] "RemoveContainer" containerID="0978fec8ebef5e4c6216203c48d5dcf678b35143ed63c9121fa323667d67e61f" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.712154 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sdddl_fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/extract-content/1.log" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.712557 3556 generic.go:334] "Generic (PLEG): container finished" podID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerID="42458ce58837741299b0c5d28c9c9740ea1b1bee450e6863b9f24e019fc70b49" exitCode=2 Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.712601 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"42458ce58837741299b0c5d28c9c9740ea1b1bee450e6863b9f24e019fc70b49"} Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.712665 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sdddl" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.747694 3556 scope.go:117] "RemoveContainer" containerID="42458ce58837741299b0c5d28c9c9740ea1b1bee450e6863b9f24e019fc70b49" Nov 28 00:16:26 crc kubenswrapper[3556]: I1128 00:16:26.776232 3556 scope.go:117] "RemoveContainer" containerID="e5c736d53be1471d1ab07cfd866314ffad6eb9cc5b747813b16ae47477d77f9c" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.723439 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerStarted","Data":"3944f77ceed2054b46f9831d9f21bdaa9181051c99718c8a2df926554ae9571c"} Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.723629 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8s8pc" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server" containerID="cri-o://3944f77ceed2054b46f9831d9f21bdaa9181051c99718c8a2df926554ae9571c" gracePeriod=30 Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.731473 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7287f_887d596e-c519-4bfa-af90-3edd9e1b2f0f/extract-content/1.log" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.732078 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7287f" event={"ID":"887d596e-c519-4bfa-af90-3edd9e1b2f0f","Type":"ContainerDied","Data":"19df70af114a22a2e4f781c3e6b786a3deea68547ebf97fc2244ab3d01397145"} Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.732152 3556 scope.go:117] "RemoveContainer" containerID="0f19d2f1712f9450cd77baf638932f67c27d1f4518c975033e17a85c9f709684" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.732368 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7287f" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.735546 3556 generic.go:334] "Generic (PLEG): container finished" podID="b043bb6a-7727-4c8f-8fc4-64660e345ec4" containerID="8b9d5ed6ecaf5cb1189407632ad1331c49085c5972c198a86bc81f10ecc752c9" exitCode=0 Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.735583 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" event={"ID":"b043bb6a-7727-4c8f-8fc4-64660e345ec4","Type":"ContainerDied","Data":"8b9d5ed6ecaf5cb1189407632ad1331c49085c5972c198a86bc81f10ecc752c9"} Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.737165 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sdddl" event={"ID":"fc9c9ba0-fcbb-4e78-8cf5-a059ec435760","Type":"ContainerDied","Data":"21b5011a009265c36f74b035721ee71dcca64dce00dcee67a1912391382bdc41"} Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.740515 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztgxm" event={"ID":"636f4587-587c-4c55-8f7f-8722b05f3bf5","Type":"ContainerStarted","Data":"5ba7d3926b6155f4a00041a95c5bffb40456f94fb544bc595fb8d2747e4faf09"} Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.740791 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ztgxm" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerName="extract-content" containerID="cri-o://5ba7d3926b6155f4a00041a95c5bffb40456f94fb544bc595fb8d2747e4faf09" gracePeriod=30 Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.747080 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" 
event={"ID":"3482be94-0cdb-4e2a-889b-e5fac59fdbf5","Type":"ContainerDied","Data":"eea601870eb6dbafffccd3eb3b1ead953886969a6f1fe9e21f3b501a15d89bb4"} Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.747155 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-8b455464d-f9xdt" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.766716 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" event={"ID":"57b23f79-74b4-4ba9-bf50-aeaa322b31df","Type":"ContainerStarted","Data":"e2cdc462fce32d6d9f87cbe36c9dd7d6c83b4d1f527804b59c14e48a8848792f"} Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.767128 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jhz6" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content" containerID="cri-o://ecda30d91c518d762f9d808359ce1f678ce75a5920e5e49591d12a906f39246a" gracePeriod=30 Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.767236 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4jkp" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content" containerID="cri-o://ec5ed9e559fe9e57bd17180538d0d430c9e1d35fe59ac0fddef4719a7fbbca91" gracePeriod=30 Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.767985 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.774151 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.867207 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.870339 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-8b455464d-f9xdt"] Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.905625 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-8b455464d-xd2kb" podStartSLOduration=9.905580064 podStartE2EDuration="9.905580064s" podCreationTimestamp="2025-11-28 00:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:16:27.87980082 +0000 UTC m=+249.472032830" watchObservedRunningTime="2025-11-28 00:16:27.905580064 +0000 UTC m=+249.497812064" Nov 28 00:16:27 crc kubenswrapper[3556]: I1128 00:16:27.941920 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29404800-brn7x" podStartSLOduration=209.941764638 podStartE2EDuration="3m29.941764638s" podCreationTimestamp="2025-11-28 00:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:16:27.939418793 +0000 UTC m=+249.531650793" watchObservedRunningTime="2025-11-28 00:16:27.941764638 +0000 UTC m=+249.533996638" Nov 28 00:16:28 crc kubenswrapper[3556]: I1128 00:16:28.926856 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" path="/var/lib/kubelet/pods/3482be94-0cdb-4e2a-889b-e5fac59fdbf5/volumes" Nov 28 00:16:28 crc kubenswrapper[3556]: I1128 00:16:28.947121 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8s8pc" Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.848755 3556 scope.go:117] "RemoveContainer" 
containerID="efb15e1bc3010a2eaba8de1d315ede99d3b07b6163cd4e47c5d1f9c95bc5e6ec" Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.933247 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qq2b" Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.940667 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.944552 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.978189 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcgsg\" (UniqueName: \"kubernetes.io/projected/6ff4c74d-d051-42b5-b30b-75580e80299d-kube-api-access-qcgsg\") pod \"6ff4c74d-d051-42b5-b30b-75580e80299d\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.978255 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-utilities\") pod \"6ff4c74d-d051-42b5-b30b-75580e80299d\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.978288 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-catalog-content\") pod \"6ff4c74d-d051-42b5-b30b-75580e80299d\" (UID: \"6ff4c74d-d051-42b5-b30b-75580e80299d\") " Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.979096 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "6ff4c74d-d051-42b5-b30b-75580e80299d" (UID: "6ff4c74d-d051-42b5-b30b-75580e80299d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.979883 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-utilities" (OuterVolumeSpecName: "utilities") pod "6ff4c74d-d051-42b5-b30b-75580e80299d" (UID: "6ff4c74d-d051-42b5-b30b-75580e80299d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:16:29 crc kubenswrapper[3556]: I1128 00:16:29.985230 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff4c74d-d051-42b5-b30b-75580e80299d-kube-api-access-qcgsg" (OuterVolumeSpecName: "kube-api-access-qcgsg") pod "6ff4c74d-d051-42b5-b30b-75580e80299d" (UID: "6ff4c74d-d051-42b5-b30b-75580e80299d"). InnerVolumeSpecName "kube-api-access-qcgsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.079662 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-catalog-content\") pod \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.079740 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-utilities\") pod \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.079789 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9p87\" (UniqueName: \"kubernetes.io/projected/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-kube-api-access-m9p87\") pod \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\" (UID: \"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2\") " Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.079832 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b043bb6a-7727-4c8f-8fc4-64660e345ec4-config-volume\") pod \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.079975 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b043bb6a-7727-4c8f-8fc4-64660e345ec4-secret-volume\") pod \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.080072 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsxw9\" 
(UniqueName: \"kubernetes.io/projected/b043bb6a-7727-4c8f-8fc4-64660e345ec4-kube-api-access-zsxw9\") pod \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\" (UID: \"b043bb6a-7727-4c8f-8fc4-64660e345ec4\") " Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.080152 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" (UID: "949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.080602 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-utilities" (OuterVolumeSpecName: "utilities") pod "949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" (UID: "949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.080649 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.080735 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qcgsg\" (UniqueName: \"kubernetes.io/projected/6ff4c74d-d051-42b5-b30b-75580e80299d-kube-api-access-qcgsg\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.080755 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.080768 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff4c74d-d051-42b5-b30b-75580e80299d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.081343 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b043bb6a-7727-4c8f-8fc4-64660e345ec4-config-volume" (OuterVolumeSpecName: "config-volume") pod "b043bb6a-7727-4c8f-8fc4-64660e345ec4" (UID: "b043bb6a-7727-4c8f-8fc4-64660e345ec4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.084594 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-kube-api-access-m9p87" (OuterVolumeSpecName: "kube-api-access-m9p87") pod "949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" (UID: "949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2"). InnerVolumeSpecName "kube-api-access-m9p87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.085078 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b043bb6a-7727-4c8f-8fc4-64660e345ec4-kube-api-access-zsxw9" (OuterVolumeSpecName: "kube-api-access-zsxw9") pod "b043bb6a-7727-4c8f-8fc4-64660e345ec4" (UID: "b043bb6a-7727-4c8f-8fc4-64660e345ec4"). InnerVolumeSpecName "kube-api-access-zsxw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.086367 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b043bb6a-7727-4c8f-8fc4-64660e345ec4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b043bb6a-7727-4c8f-8fc4-64660e345ec4" (UID: "b043bb6a-7727-4c8f-8fc4-64660e345ec4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.182301 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zsxw9\" (UniqueName: \"kubernetes.io/projected/b043bb6a-7727-4c8f-8fc4-64660e345ec4-kube-api-access-zsxw9\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.182367 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.182391 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m9p87\" (UniqueName: \"kubernetes.io/projected/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2-kube-api-access-m9p87\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.182411 3556 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b043bb6a-7727-4c8f-8fc4-64660e345ec4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.182431 3556 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b043bb6a-7727-4c8f-8fc4-64660e345ec4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.789762 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28fk8" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.789841 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28fk8" event={"ID":"949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2","Type":"ContainerDied","Data":"387bb41983e1de16d2a9eb898095afeff2a100a3be5c13d7fe4a56fba89fac42"} Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.789935 3556 scope.go:117] "RemoveContainer" containerID="f130c785cd1e052ccbf25fd3edb68446feb675331fa234d31a7119c93ad64a08" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.793339 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc" event={"ID":"b043bb6a-7727-4c8f-8fc4-64660e345ec4","Type":"ContainerDied","Data":"123a7f2a8f030c689f462e1dcc09a56cabe6b53967660afcb03c51ddc1557a0a"} Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.793377 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="123a7f2a8f030c689f462e1dcc09a56cabe6b53967660afcb03c51ddc1557a0a" Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.793507 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404815-8hvvc"
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.797284 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ztgxm_636f4587-587c-4c55-8f7f-8722b05f3bf5/extract-content/0.log"
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.799217 3556 generic.go:334] "Generic (PLEG): container finished" podID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerID="5ba7d3926b6155f4a00041a95c5bffb40456f94fb544bc595fb8d2747e4faf09" exitCode=2
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.799562 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztgxm" event={"ID":"636f4587-587c-4c55-8f7f-8722b05f3bf5","Type":"ContainerDied","Data":"5ba7d3926b6155f4a00041a95c5bffb40456f94fb544bc595fb8d2747e4faf09"}
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.802077 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log"
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.802623 3556 generic.go:334] "Generic (PLEG): container finished" podID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerID="ecda30d91c518d762f9d808359ce1f678ce75a5920e5e49591d12a906f39246a" exitCode=2
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.802726 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"ecda30d91c518d762f9d808359ce1f678ce75a5920e5e49591d12a906f39246a"}
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.812434 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qq2b"
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.812586 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qq2b" event={"ID":"6ff4c74d-d051-42b5-b30b-75580e80299d","Type":"ContainerDied","Data":"ba9f991522ebdaabd44d80123c26b8b8ccfd29b8da37a0f1275044aba285022f"}
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.814603 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log"
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.815028 3556 generic.go:334] "Generic (PLEG): container finished" podID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerID="ec5ed9e559fe9e57bd17180538d0d430c9e1d35fe59ac0fddef4719a7fbbca91" exitCode=2
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.815085 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"ec5ed9e559fe9e57bd17180538d0d430c9e1d35fe59ac0fddef4719a7fbbca91"}
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.821323 3556 generic.go:334] "Generic (PLEG): container finished" podID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerID="3944f77ceed2054b46f9831d9f21bdaa9181051c99718c8a2df926554ae9571c" exitCode=0
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.821385 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"3944f77ceed2054b46f9831d9f21bdaa9181051c99718c8a2df926554ae9571c"}
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.851914 3556 scope.go:117] "RemoveContainer" containerID="556b0ec51ac4baeb53677487eddbbf004c813e347e1a6e4839b10bb7417d1f5f"
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.890415 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28fk8"]
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.896116 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-28fk8"]
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.910901 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qq2b"]
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.918723 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" path="/var/lib/kubelet/pods/949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2/volumes"
Nov 28 00:16:30 crc kubenswrapper[3556]: I1128 00:16:30.922423 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4qq2b"]
Nov 28 00:16:31 crc kubenswrapper[3556]: I1128 00:16:31.044498 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"]
Nov 28 00:16:31 crc kubenswrapper[3556]: I1128 00:16:31.049464 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251920-wcws2"]
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.108685 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ztgxm_636f4587-587c-4c55-8f7f-8722b05f3bf5/extract-content/0.log"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.110101 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztgxm"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.189996 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.190330 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.217086 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4r7g\" (UniqueName: \"kubernetes.io/projected/636f4587-587c-4c55-8f7f-8722b05f3bf5-kube-api-access-h4r7g\") pod \"636f4587-587c-4c55-8f7f-8722b05f3bf5\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.217181 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-utilities\") pod \"636f4587-587c-4c55-8f7f-8722b05f3bf5\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.217277 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-catalog-content\") pod \"636f4587-587c-4c55-8f7f-8722b05f3bf5\" (UID: \"636f4587-587c-4c55-8f7f-8722b05f3bf5\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.221609 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-utilities" (OuterVolumeSpecName: "utilities") pod "636f4587-587c-4c55-8f7f-8722b05f3bf5" (UID: "636f4587-587c-4c55-8f7f-8722b05f3bf5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.223185 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636f4587-587c-4c55-8f7f-8722b05f3bf5-kube-api-access-h4r7g" (OuterVolumeSpecName: "kube-api-access-h4r7g") pod "636f4587-587c-4c55-8f7f-8722b05f3bf5" (UID: "636f4587-587c-4c55-8f7f-8722b05f3bf5"). InnerVolumeSpecName "kube-api-access-h4r7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.234241 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.283111 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.283778 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.318690 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.319065 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.319910 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.319819 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities" (OuterVolumeSpecName: "utilities") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.321923 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities" (OuterVolumeSpecName: "utilities") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.325202 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.325317 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") pod \"4092a9f8-5acc-4932-9e90-ef962eeb301a\" (UID: \"4092a9f8-5acc-4932-9e90-ef962eeb301a\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.325366 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") pod \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\" (UID: \"c782cf62-a827-4677-b3c2-6f82c5f09cbb\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.325666 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.325684 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.325695 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h4r7g\" (UniqueName: \"kubernetes.io/projected/636f4587-587c-4c55-8f7f-8722b05f3bf5-kube-api-access-h4r7g\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.325708 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.328756 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb" (OuterVolumeSpecName: "kube-api-access-ptdrb") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "kube-api-access-ptdrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.330409 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r" (OuterVolumeSpecName: "kube-api-access-tf29r") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "kube-api-access-tf29r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.427148 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.427210 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.427307 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") pod \"3f4dca86-e6ee-4ec9-8324-86aff960225e\" (UID: \"3f4dca86-e6ee-4ec9-8324-86aff960225e\") "
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.427626 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ptdrb\" (UniqueName: \"kubernetes.io/projected/4092a9f8-5acc-4932-9e90-ef962eeb301a-kube-api-access-ptdrb\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.427656 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tf29r\" (UniqueName: \"kubernetes.io/projected/c782cf62-a827-4677-b3c2-6f82c5f09cbb-kube-api-access-tf29r\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.429031 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities" (OuterVolumeSpecName: "utilities") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.431551 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt" (OuterVolumeSpecName: "kube-api-access-n6sqt") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "kube-api-access-n6sqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.529876 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.529950 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-n6sqt\" (UniqueName: \"kubernetes.io/projected/3f4dca86-e6ee-4ec9-8324-86aff960225e-kube-api-access-n6sqt\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.839780 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jhz6_3f4dca86-e6ee-4ec9-8324-86aff960225e/extract-content/1.log"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.841438 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jhz6" event={"ID":"3f4dca86-e6ee-4ec9-8324-86aff960225e","Type":"ContainerDied","Data":"33192eb70edfcf11231ecd532784f6f4f40b7e675b25362168b4ee9286c72af7"}
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.841465 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jhz6"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.841846 3556 scope.go:117] "RemoveContainer" containerID="ecda30d91c518d762f9d808359ce1f678ce75a5920e5e49591d12a906f39246a"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.844391 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4jkp_4092a9f8-5acc-4932-9e90-ef962eeb301a/extract-content/1.log"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.845554 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4jkp" event={"ID":"4092a9f8-5acc-4932-9e90-ef962eeb301a","Type":"ContainerDied","Data":"195f49fb02a42f994b8b3d334b8850f88e01f7d8d80d639b4762715c8f0289e5"}
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.845611 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4jkp"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.848533 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8s8pc" event={"ID":"c782cf62-a827-4677-b3c2-6f82c5f09cbb","Type":"ContainerDied","Data":"3bd726061ae1b87c232ae9ce371120c86f41320648ccfaa8a3d1cdd459309eae"}
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.848595 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8s8pc"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.850290 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ztgxm_636f4587-587c-4c55-8f7f-8722b05f3bf5/extract-content/0.log"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.850762 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ztgxm" event={"ID":"636f4587-587c-4c55-8f7f-8722b05f3bf5","Type":"ContainerDied","Data":"ddebb0e7d6e6fdcc0f5234eee5cdc66f50ab3e81de711dcb04811bbe03f5c863"}
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.850819 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ztgxm"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.885422 3556 scope.go:117] "RemoveContainer" containerID="779855bb99a063294e4c710e0bad766ae6f8ac833a3636bc665bf1362c86850b"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.921352 3556 scope.go:117] "RemoveContainer" containerID="ec5ed9e559fe9e57bd17180538d0d430c9e1d35fe59ac0fddef4719a7fbbca91"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.921918 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff4c74d-d051-42b5-b30b-75580e80299d" path="/var/lib/kubelet/pods/6ff4c74d-d051-42b5-b30b-75580e80299d/volumes"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.923136 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deaee4f4-7b7a-442d-99b7-c8ac62ef5f27" path="/var/lib/kubelet/pods/deaee4f4-7b7a-442d-99b7-c8ac62ef5f27/volumes"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.960877 3556 scope.go:117] "RemoveContainer" containerID="c5b0f5e0b45a9b90754773eb847b69b94dfbc8067427503505055ed085186bc5"
Nov 28 00:16:32 crc kubenswrapper[3556]: I1128 00:16:32.996153 3556 scope.go:117] "RemoveContainer" containerID="3944f77ceed2054b46f9831d9f21bdaa9181051c99718c8a2df926554ae9571c"
Nov 28 00:16:33 crc kubenswrapper[3556]: I1128 00:16:33.024468 3556 scope.go:117] "RemoveContainer" containerID="b80434ca98c3aefe56302e56b847d1b69f46aa28a5e758341619dac2c0dbd07e"
Nov 28 00:16:33 crc kubenswrapper[3556]: I1128 00:16:33.071964 3556 scope.go:117] "RemoveContainer" containerID="37664d7f50a3df46c30fdeaad2899d1d2d3a541ea3bb29fe926dcd31232c7787"
Nov 28 00:16:33 crc kubenswrapper[3556]: I1128 00:16:33.113422 3556 scope.go:117] "RemoveContainer" containerID="5ba7d3926b6155f4a00041a95c5bffb40456f94fb544bc595fb8d2747e4faf09"
Nov 28 00:16:33 crc kubenswrapper[3556]: I1128 00:16:33.143250 3556 scope.go:117] "RemoveContainer" containerID="7693ae48dd7fe1baaf5a6aa91c7de341ac12854c533bb7ff4dd25a2c4f4a7fd0"
Nov 28 00:16:34 crc kubenswrapper[3556]: E1128 00:16:34.198986 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[registry-storage], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"
Nov 28 00:16:42 crc kubenswrapper[3556]: I1128 00:16:42.878762 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "636f4587-587c-4c55-8f7f-8722b05f3bf5" (UID: "636f4587-587c-4c55-8f7f-8722b05f3bf5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:42 crc kubenswrapper[3556]: I1128 00:16:42.914177 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/636f4587-587c-4c55-8f7f-8722b05f3bf5-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.103174 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztgxm"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.111423 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ztgxm"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.261377 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f4dca86-e6ee-4ec9-8324-86aff960225e" (UID: "3f4dca86-e6ee-4ec9-8324-86aff960225e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.291554 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4092a9f8-5acc-4932-9e90-ef962eeb301a" (UID: "4092a9f8-5acc-4932-9e90-ef962eeb301a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.320960 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4092a9f8-5acc-4932-9e90-ef962eeb301a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.321070 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f4dca86-e6ee-4ec9-8324-86aff960225e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.396191 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.399688 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4jkp"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.419643 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbgdp"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.419983 3556 topology_manager.go:215] "Topology Admit Handler" podUID="3fe93442-3fb2-4ae8-ade9-110f5702aa99" podNamespace="openshift-marketplace" podName="redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.420193 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.428335 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.428523 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.428764 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.428867 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.428927 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.428983 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6ff4c74d-d051-42b5-b30b-75580e80299d" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.429069 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff4c74d-d051-42b5-b30b-75580e80299d" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.429137 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.429194 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.429371 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.429446 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.429509 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430119 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430169 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430181 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430198 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430209 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430232 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430244 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430261 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430273 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430290 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430301 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430322 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430335 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430376 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430389 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430404 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b043bb6a-7727-4c8f-8fc4-64660e345ec4" containerName="collect-profiles"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430416 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="b043bb6a-7727-4c8f-8fc4-64660e345ec4" containerName="collect-profiles"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430430 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430441 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: E1128 00:16:43.430457 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430468 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430736 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" containerName="registry-server"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430759 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3482be94-0cdb-4e2a-889b-e5fac59fdbf5" containerName="marketplace-operator"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430773 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff4c74d-d051-42b5-b30b-75580e80299d" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430791 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430805 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="949f9fd1-be5f-4542-ab7a-9a4ce2b3b8a2" containerName="extract-utilities"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430822 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="b043bb6a-7727-4c8f-8fc4-64660e345ec4" containerName="collect-profiles"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430836 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430854 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430867 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.430882 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" containerName="extract-content"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.431700 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.431798 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.434349 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jhz6"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.435403 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.444689 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbgdp"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.524983 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe93442-3fb2-4ae8-ade9-110f5702aa99-utilities\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.525612 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe93442-3fb2-4ae8-ade9-110f5702aa99-catalog-content\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.525857 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chxqn\" (UniqueName: \"kubernetes.io/projected/3fe93442-3fb2-4ae8-ade9-110f5702aa99-kube-api-access-chxqn\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.627796 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe93442-3fb2-4ae8-ade9-110f5702aa99-utilities\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.627958 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe93442-3fb2-4ae8-ade9-110f5702aa99-catalog-content\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.628037 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-chxqn\" (UniqueName: \"kubernetes.io/projected/3fe93442-3fb2-4ae8-ade9-110f5702aa99-kube-api-access-chxqn\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.628521 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fe93442-3fb2-4ae8-ade9-110f5702aa99-utilities\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.628671 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fe93442-3fb2-4ae8-ade9-110f5702aa99-catalog-content\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.659587 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-chxqn\" (UniqueName: \"kubernetes.io/projected/3fe93442-3fb2-4ae8-ade9-110f5702aa99-kube-api-access-chxqn\") pod \"redhat-operators-fbgdp\" (UID: \"3fe93442-3fb2-4ae8-ade9-110f5702aa99\") " pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.741777 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c782cf62-a827-4677-b3c2-6f82c5f09cbb" (UID: "c782cf62-a827-4677-b3c2-6f82c5f09cbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.750647 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbgdp"
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.830528 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c782cf62-a827-4677-b3c2-6f82c5f09cbb-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.984502 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"]
Nov 28 00:16:43 crc kubenswrapper[3556]: I1128 00:16:43.989390 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8s8pc"]
Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.172471 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "887d596e-c519-4bfa-af90-3edd9e1b2f0f" (UID: "887d596e-c519-4bfa-af90-3edd9e1b2f0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.197205 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbgdp"]
Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.224733 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" (UID: "fc9c9ba0-fcbb-4e78-8cf5-a059ec435760"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.237918 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/887d596e-c519-4bfa-af90-3edd9e1b2f0f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.237979 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.311025 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kbw72"] Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.311181 3556 topology_manager.go:215] "Topology Admit Handler" podUID="23e1da55-6d41-441d-9587-9b9c74e80d23" podNamespace="openshift-marketplace" podName="community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.312337 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.313405 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7287f"] Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.321452 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7287f"] Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.325559 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kbw72"] Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.440296 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e1da55-6d41-441d-9587-9b9c74e80d23-utilities\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.440395 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e1da55-6d41-441d-9587-9b9c74e80d23-catalog-content\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.440436 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvq8\" (UniqueName: \"kubernetes.io/projected/23e1da55-6d41-441d-9587-9b9c74e80d23-kube-api-access-4pvq8\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.458406 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-sdddl"] Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.463966 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sdddl"] Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.542105 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e1da55-6d41-441d-9587-9b9c74e80d23-utilities\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.542502 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e1da55-6d41-441d-9587-9b9c74e80d23-catalog-content\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.542584 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvq8\" (UniqueName: \"kubernetes.io/projected/23e1da55-6d41-441d-9587-9b9c74e80d23-kube-api-access-4pvq8\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.543189 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e1da55-6d41-441d-9587-9b9c74e80d23-catalog-content\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.543396 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/23e1da55-6d41-441d-9587-9b9c74e80d23-utilities\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.562548 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvq8\" (UniqueName: \"kubernetes.io/projected/23e1da55-6d41-441d-9587-9b9c74e80d23-kube-api-access-4pvq8\") pod \"community-operators-kbw72\" (UID: \"23e1da55-6d41-441d-9587-9b9c74e80d23\") " pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.631064 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kbw72" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.919805 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4dca86-e6ee-4ec9-8324-86aff960225e" path="/var/lib/kubelet/pods/3f4dca86-e6ee-4ec9-8324-86aff960225e/volumes" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.921240 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4092a9f8-5acc-4932-9e90-ef962eeb301a" path="/var/lib/kubelet/pods/4092a9f8-5acc-4932-9e90-ef962eeb301a/volumes" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.922725 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636f4587-587c-4c55-8f7f-8722b05f3bf5" path="/var/lib/kubelet/pods/636f4587-587c-4c55-8f7f-8722b05f3bf5/volumes" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.923451 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887d596e-c519-4bfa-af90-3edd9e1b2f0f" path="/var/lib/kubelet/pods/887d596e-c519-4bfa-af90-3edd9e1b2f0f/volumes" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.951662 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c782cf62-a827-4677-b3c2-6f82c5f09cbb" 
path="/var/lib/kubelet/pods/c782cf62-a827-4677-b3c2-6f82c5f09cbb/volumes" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.953305 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9c9ba0-fcbb-4e78-8cf5-a059ec435760" path="/var/lib/kubelet/pods/fc9c9ba0-fcbb-4e78-8cf5-a059ec435760/volumes" Nov 28 00:16:44 crc kubenswrapper[3556]: I1128 00:16:44.965644 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbgdp" event={"ID":"3fe93442-3fb2-4ae8-ade9-110f5702aa99","Type":"ContainerStarted","Data":"808163e8feb3bf5f0001a867a628762aedc7525af7d788c592096d780cfacc1f"} Nov 28 00:16:45 crc kubenswrapper[3556]: I1128 00:16:45.047632 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kbw72"] Nov 28 00:16:45 crc kubenswrapper[3556]: I1128 00:16:45.907355 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8zs8v"] Nov 28 00:16:45 crc kubenswrapper[3556]: I1128 00:16:45.907665 3556 topology_manager.go:215] "Topology Admit Handler" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" podNamespace="openshift-marketplace" podName="redhat-marketplace-8zs8v" Nov 28 00:16:45 crc kubenswrapper[3556]: I1128 00:16:45.908462 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:45 crc kubenswrapper[3556]: I1128 00:16:45.910512 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz" Nov 28 00:16:45 crc kubenswrapper[3556]: I1128 00:16:45.926893 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zs8v"] Nov 28 00:16:45 crc kubenswrapper[3556]: I1128 00:16:45.973809 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbw72" event={"ID":"23e1da55-6d41-441d-9587-9b9c74e80d23","Type":"ContainerStarted","Data":"4980f991890dacaf4102ce591bbf0083aacf3053e309b1a28f30f984a70291e2"} Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.050927 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-catalog-content\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.051061 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-utilities\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.051206 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69r5\" (UniqueName: \"kubernetes.io/projected/7a4a4778-a2d1-49b1-942b-0cf262013ba4-kube-api-access-p69r5\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " 
pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.114987 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fqbbn"] Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.115147 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9ccee53e-7afd-4302-8b8e-5dfc9c4b5976" podNamespace="openshift-marketplace" podName="certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.116317 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.123609 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.124251 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqbbn"] Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.152587 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-utilities\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.152665 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-catalog-content\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.152755 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p69r5\" (UniqueName: \"kubernetes.io/projected/7a4a4778-a2d1-49b1-942b-0cf262013ba4-kube-api-access-p69r5\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.152820 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-catalog-content\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.152851 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55g8\" (UniqueName: \"kubernetes.io/projected/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-kube-api-access-z55g8\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.152882 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-utilities\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.153456 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-utilities\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.153505 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-catalog-content\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.180555 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-p69r5\" (UniqueName: \"kubernetes.io/projected/7a4a4778-a2d1-49b1-942b-0cf262013ba4-kube-api-access-p69r5\") pod \"redhat-marketplace-8zs8v\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") " pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.223712 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zs8v" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.253841 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-z55g8\" (UniqueName: \"kubernetes.io/projected/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-kube-api-access-z55g8\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.254548 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-utilities\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.254599 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-catalog-content\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " 
pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.255508 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-utilities\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.255622 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-catalog-content\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.273139 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55g8\" (UniqueName: \"kubernetes.io/projected/9ccee53e-7afd-4302-8b8e-5dfc9c4b5976-kube-api-access-z55g8\") pod \"certified-operators-fqbbn\" (UID: \"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976\") " pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.573139 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqbbn" Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.632649 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zs8v"] Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.810316 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqbbn"] Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.980488 3556 generic.go:334] "Generic (PLEG): container finished" podID="23e1da55-6d41-441d-9587-9b9c74e80d23" containerID="cf08f16fbcb004c00b87713aba9513c94a70fd071c5ec3cfeee88d795114eb41" exitCode=0 Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.980571 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbw72" event={"ID":"23e1da55-6d41-441d-9587-9b9c74e80d23","Type":"ContainerDied","Data":"cf08f16fbcb004c00b87713aba9513c94a70fd071c5ec3cfeee88d795114eb41"} Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.983208 3556 generic.go:334] "Generic (PLEG): container finished" podID="3fe93442-3fb2-4ae8-ade9-110f5702aa99" containerID="7bfe96c9cd29dc3653dd3f97a5626ff148fdb3ebc2539eec9bfb30c1a78027b2" exitCode=0 Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.983271 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbgdp" event={"ID":"3fe93442-3fb2-4ae8-ade9-110f5702aa99","Type":"ContainerDied","Data":"7bfe96c9cd29dc3653dd3f97a5626ff148fdb3ebc2539eec9bfb30c1a78027b2"} Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.984443 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbbn" event={"ID":"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976","Type":"ContainerStarted","Data":"660f36b89add85dded76a751fb6efe94f40083b0c841ba3aa3471cb8dabc8c50"} Nov 28 00:16:46 crc kubenswrapper[3556]: I1128 00:16:46.985488 3556 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zs8v" event={"ID":"7a4a4778-a2d1-49b1-942b-0cf262013ba4","Type":"ContainerStarted","Data":"36e4d63bd838f088645fadd0dbef55525b2fdbe0d6229cd4b5fc449c7088e7c6"} Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.512222 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-klz5m"] Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.512852 3556 topology_manager.go:215] "Topology Admit Handler" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" podNamespace="openshift-marketplace" podName="community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.514922 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.524670 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klz5m"] Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.560176 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbwt\" (UniqueName: \"kubernetes.io/projected/b7d96c18-6677-4596-b658-c24d25cd47e2-kube-api-access-gwbwt\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.560227 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-catalog-content\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.560373 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-utilities\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.661827 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbwt\" (UniqueName: \"kubernetes.io/projected/b7d96c18-6677-4596-b658-c24d25cd47e2-kube-api-access-gwbwt\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.661912 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-catalog-content\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.661981 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-utilities\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.662573 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-utilities\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.664920 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-catalog-content\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.683044 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbwt\" (UniqueName: \"kubernetes.io/projected/b7d96c18-6677-4596-b658-c24d25cd47e2-kube-api-access-gwbwt\") pod \"community-operators-klz5m\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") " pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:47 crc kubenswrapper[3556]: I1128 00:16:47.928447 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klz5m" Nov 28 00:16:48 crc kubenswrapper[3556]: I1128 00:16:48.000672 3556 generic.go:334] "Generic (PLEG): container finished" podID="9ccee53e-7afd-4302-8b8e-5dfc9c4b5976" containerID="2807b21b1ecb5c592e9b7a90247089a37599a54054d873ff616ccc91a5bab583" exitCode=0 Nov 28 00:16:48 crc kubenswrapper[3556]: I1128 00:16:48.000760 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbbn" event={"ID":"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976","Type":"ContainerDied","Data":"2807b21b1ecb5c592e9b7a90247089a37599a54054d873ff616ccc91a5bab583"} Nov 28 00:16:48 crc kubenswrapper[3556]: I1128 00:16:48.003351 3556 generic.go:334] "Generic (PLEG): container finished" podID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerID="a4446cf6d632203fec2dc2f4a665c4d5f6f7845e3b939f28951ebb1abcf24d76" exitCode=0 Nov 28 00:16:48 crc kubenswrapper[3556]: I1128 00:16:48.003399 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zs8v" event={"ID":"7a4a4778-a2d1-49b1-942b-0cf262013ba4","Type":"ContainerDied","Data":"a4446cf6d632203fec2dc2f4a665c4d5f6f7845e3b939f28951ebb1abcf24d76"} Nov 28 00:16:48 crc 
kubenswrapper[3556]: I1128 00:16:48.386045 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klz5m"]
Nov 28 00:16:48 crc kubenswrapper[3556]: W1128 00:16:48.397421 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7d96c18_6677_4596_b658_c24d25cd47e2.slice/crio-b9ca258bac5a2104ea3f1852f245e9464e962065bf9f7a1e25bacca3bd8cf681 WatchSource:0}: Error finding container b9ca258bac5a2104ea3f1852f245e9464e962065bf9f7a1e25bacca3bd8cf681: Status 404 returned error can't find the container with id b9ca258bac5a2104ea3f1852f245e9464e962065bf9f7a1e25bacca3bd8cf681
Nov 28 00:16:48 crc kubenswrapper[3556]: I1128 00:16:48.913594 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:16:49 crc kubenswrapper[3556]: I1128 00:16:49.009927 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbbn" event={"ID":"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976","Type":"ContainerStarted","Data":"559b4596f12b9e0e1d8dba944f62292e5c4236f8ddc052a706d5fa9301ca57fb"}
Nov 28 00:16:49 crc kubenswrapper[3556]: I1128 00:16:49.011859 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zs8v" event={"ID":"7a4a4778-a2d1-49b1-942b-0cf262013ba4","Type":"ContainerStarted","Data":"9bc0dcb69f1cd164a47ddc2af0bcd338fc02f94aa714e3112778d49bc05bc011"}
Nov 28 00:16:49 crc kubenswrapper[3556]: I1128 00:16:49.013698 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbw72" event={"ID":"23e1da55-6d41-441d-9587-9b9c74e80d23","Type":"ContainerStarted","Data":"8c2f9d64624b1810fb05aadb2cb8ac146bb2c4bc7ee30d079a89d6fc68bec064"}
Nov 28 00:16:49 crc kubenswrapper[3556]: I1128 00:16:49.017314 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbgdp" event={"ID":"3fe93442-3fb2-4ae8-ade9-110f5702aa99","Type":"ContainerStarted","Data":"69b874f26df66aca999b8a26fc51cbdcb85374cc30a62369542831d5cd430474"}
Nov 28 00:16:49 crc kubenswrapper[3556]: I1128 00:16:49.018966 3556 generic.go:334] "Generic (PLEG): container finished" podID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerID="616a5f2b78880e3f333f1b82d04f2bde87d312ce1638ff89842f81ef9aa55650" exitCode=0
Nov 28 00:16:49 crc kubenswrapper[3556]: I1128 00:16:49.019265 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klz5m" event={"ID":"b7d96c18-6677-4596-b658-c24d25cd47e2","Type":"ContainerDied","Data":"616a5f2b78880e3f333f1b82d04f2bde87d312ce1638ff89842f81ef9aa55650"}
Nov 28 00:16:49 crc kubenswrapper[3556]: I1128 00:16:49.019320 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klz5m" event={"ID":"b7d96c18-6677-4596-b658-c24d25cd47e2","Type":"ContainerStarted","Data":"b9ca258bac5a2104ea3f1852f245e9464e962065bf9f7a1e25bacca3bd8cf681"}
Nov 28 00:16:50 crc kubenswrapper[3556]: I1128 00:16:50.905615 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:16:51 crc kubenswrapper[3556]: I1128 00:16:51.329372 3556 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 00:16:51 crc kubenswrapper[3556]: I1128 00:16:51.329882 3556 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6ea5f9a7192af1960ec8c50a86fd2d9a756dbf85695798868f611e04a03ec009/globalmount\"" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:16:51 crc kubenswrapper[3556]: I1128 00:16:51.413467 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75779c45fd-v2j2v\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:16:51 crc kubenswrapper[3556]: I1128 00:16:51.616651 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Nov 28 00:16:51 crc kubenswrapper[3556]: I1128 00:16:51.625706 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:16:52 crc kubenswrapper[3556]: I1128 00:16:52.035889 3556 generic.go:334] "Generic (PLEG): container finished" podID="9ccee53e-7afd-4302-8b8e-5dfc9c4b5976" containerID="559b4596f12b9e0e1d8dba944f62292e5c4236f8ddc052a706d5fa9301ca57fb" exitCode=0
Nov 28 00:16:52 crc kubenswrapper[3556]: I1128 00:16:52.037541 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbbn" event={"ID":"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976","Type":"ContainerDied","Data":"559b4596f12b9e0e1d8dba944f62292e5c4236f8ddc052a706d5fa9301ca57fb"}
Nov 28 00:16:52 crc kubenswrapper[3556]: I1128 00:16:52.037820 3556 generic.go:334] "Generic (PLEG): container finished" podID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerID="9bc0dcb69f1cd164a47ddc2af0bcd338fc02f94aa714e3112778d49bc05bc011" exitCode=0
Nov 28 00:16:52 crc kubenswrapper[3556]: I1128 00:16:52.037859 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zs8v" event={"ID":"7a4a4778-a2d1-49b1-942b-0cf262013ba4","Type":"ContainerDied","Data":"9bc0dcb69f1cd164a47ddc2af0bcd338fc02f94aa714e3112778d49bc05bc011"}
Nov 28 00:16:52 crc kubenswrapper[3556]: I1128 00:16:52.039372 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"c5d080b5c1c38e4e193c5f121b1fb946f0e797ddfc721176f38639d89b2b9bf5"}
Nov 28 00:16:52 crc kubenswrapper[3556]: I1128 00:16:52.042237 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klz5m" event={"ID":"b7d96c18-6677-4596-b658-c24d25cd47e2","Type":"ContainerStarted","Data":"da5767b9b07d99da13039d1f9d30ff281991e2178c5ad735e494a700b34c8a34"}
Nov 28 00:16:53 crc kubenswrapper[3556]: I1128 00:16:53.052154 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zs8v" event={"ID":"7a4a4778-a2d1-49b1-942b-0cf262013ba4","Type":"ContainerStarted","Data":"4c6d2c29f117acbf39f09ada99524e3dac8a0de8e96b417d647f6cfa3f7424af"}
Nov 28 00:16:53 crc kubenswrapper[3556]: I1128 00:16:53.053520 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerStarted","Data":"a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458"}
Nov 28 00:16:54 crc kubenswrapper[3556]: I1128 00:16:54.061962 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqbbn" event={"ID":"9ccee53e-7afd-4302-8b8e-5dfc9c4b5976","Type":"ContainerStarted","Data":"3484ff050567d14e5ffdd471bd599672a90a1c3fa1e04b87bcf72aed4af6052b"}
Nov 28 00:16:54 crc kubenswrapper[3556]: I1128 00:16:54.062418 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:16:54 crc kubenswrapper[3556]: I1128 00:16:54.085066 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8zs8v" podStartSLOduration=4.771262466 podStartE2EDuration="9.084983805s" podCreationTimestamp="2025-11-28 00:16:45 +0000 UTC" firstStartedPulling="2025-11-28 00:16:48.013349359 +0000 UTC m=+269.605581369" lastFinishedPulling="2025-11-28 00:16:52.327070718 +0000 UTC m=+273.919302708" observedRunningTime="2025-11-28 00:16:54.084068813 +0000 UTC m=+275.676300823" watchObservedRunningTime="2025-11-28 00:16:54.084983805 +0000 UTC m=+275.677215795"
Nov 28 00:16:54 crc kubenswrapper[3556]: I1128 00:16:54.112249 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fqbbn" podStartSLOduration=3.7493650130000002 podStartE2EDuration="8.112171493s" podCreationTimestamp="2025-11-28 00:16:46 +0000 UTC" firstStartedPulling="2025-11-28 00:16:48.002243014 +0000 UTC m=+269.594475024" lastFinishedPulling="2025-11-28 00:16:52.365049504 +0000 UTC m=+273.957281504" observedRunningTime="2025-11-28 00:16:54.110164995 +0000 UTC m=+275.702397005" watchObservedRunningTime="2025-11-28 00:16:54.112171493 +0000 UTC m=+275.704403493"
Nov 28 00:16:56 crc kubenswrapper[3556]: I1128 00:16:56.224602 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8zs8v"
Nov 28 00:16:56 crc kubenswrapper[3556]: I1128 00:16:56.224992 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8zs8v"
Nov 28 00:16:56 crc kubenswrapper[3556]: I1128 00:16:56.515060 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8zs8v"
Nov 28 00:16:56 crc kubenswrapper[3556]: I1128 00:16:56.574048 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fqbbn"
Nov 28 00:16:56 crc kubenswrapper[3556]: I1128 00:16:56.578142 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fqbbn"
Nov 28 00:16:56 crc kubenswrapper[3556]: I1128 00:16:56.716316 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fqbbn"
Nov 28 00:16:57 crc kubenswrapper[3556]: I1128 00:16:57.078998 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log"
Nov 28 00:16:57 crc kubenswrapper[3556]: I1128 00:16:57.079100 3556 generic.go:334] "Generic (PLEG): container finished" podID="7d51f445-054a-4e4f-a67b-a828f5a32511" containerID="065c9c87192408d819a036c6c7041c7be48f4f04b6c761caac69659c27ced1d9" exitCode=1
Nov 28 00:16:57 crc kubenswrapper[3556]: I1128 00:16:57.079165 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerDied","Data":"065c9c87192408d819a036c6c7041c7be48f4f04b6c761caac69659c27ced1d9"}
Nov 28 00:16:57 crc kubenswrapper[3556]: I1128 00:16:57.079667 3556 scope.go:117] "RemoveContainer" containerID="065c9c87192408d819a036c6c7041c7be48f4f04b6c761caac69659c27ced1d9"
Nov 28 00:16:57 crc kubenswrapper[3556]: I1128 00:16:57.081381 3556 generic.go:334] "Generic (PLEG): container finished" podID="23e1da55-6d41-441d-9587-9b9c74e80d23" containerID="8c2f9d64624b1810fb05aadb2cb8ac146bb2c4bc7ee30d079a89d6fc68bec064" exitCode=0
Nov 28 00:16:57 crc kubenswrapper[3556]: I1128 00:16:57.081462 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbw72" event={"ID":"23e1da55-6d41-441d-9587-9b9c74e80d23","Type":"ContainerDied","Data":"8c2f9d64624b1810fb05aadb2cb8ac146bb2c4bc7ee30d079a89d6fc68bec064"}
Nov 28 00:16:58 crc kubenswrapper[3556]: I1128 00:16:58.212894 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fqbbn"
Nov 28 00:16:59 crc kubenswrapper[3556]: I1128 00:16:59.095680 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-7d46d5bb6d-rrg6t_7d51f445-054a-4e4f-a67b-a828f5a32511/ingress-operator/4.log"
Nov 28 00:16:59 crc kubenswrapper[3556]: I1128 00:16:59.095792 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-7d46d5bb6d-rrg6t" event={"ID":"7d51f445-054a-4e4f-a67b-a828f5a32511","Type":"ContainerStarted","Data":"a9f3d3ee298e06142e039063a6b49221b7f9c6ac2c13620af4902dcf17efd247"}
Nov 28 00:16:59 crc kubenswrapper[3556]: I1128 00:16:59.098520 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kbw72" event={"ID":"23e1da55-6d41-441d-9587-9b9c74e80d23","Type":"ContainerStarted","Data":"ff31198493f02ba8cc92945d2c698497d74f30426db0c77bf7c406cd09439ada"}
Nov 28 00:16:59 crc kubenswrapper[3556]: I1128 00:16:59.144663 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kbw72" podStartSLOduration=5.726948169 podStartE2EDuration="15.14460605s" podCreationTimestamp="2025-11-28 00:16:44 +0000 UTC" firstStartedPulling="2025-11-28 00:16:48.013360269 +0000 UTC m=+269.605592259" lastFinishedPulling="2025-11-28 00:16:57.43101815 +0000 UTC m=+279.023250140" observedRunningTime="2025-11-28 00:16:59.143309439 +0000 UTC m=+280.735541429" watchObservedRunningTime="2025-11-28 00:16:59.14460605 +0000 UTC m=+280.736838050"
Nov 28 00:17:00 crc kubenswrapper[3556]: I1128 00:17:00.105751 3556 generic.go:334] "Generic (PLEG): container finished" podID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerID="da5767b9b07d99da13039d1f9d30ff281991e2178c5ad735e494a700b34c8a34" exitCode=0
Nov 28 00:17:00 crc kubenswrapper[3556]: I1128 00:17:00.105794 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klz5m" event={"ID":"b7d96c18-6677-4596-b658-c24d25cd47e2","Type":"ContainerDied","Data":"da5767b9b07d99da13039d1f9d30ff281991e2178c5ad735e494a700b34c8a34"}
Nov 28 00:17:04 crc kubenswrapper[3556]: I1128 00:17:04.631847 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kbw72"
Nov 28 00:17:04 crc kubenswrapper[3556]: I1128 00:17:04.632429 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kbw72"
Nov 28 00:17:04 crc kubenswrapper[3556]: I1128 00:17:04.733842 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kbw72"
Nov 28 00:17:05 crc kubenswrapper[3556]: I1128 00:17:05.248200 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kbw72"
Nov 28 00:17:06 crc kubenswrapper[3556]: I1128 00:17:06.322536 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8zs8v"
Nov 28 00:17:07 crc kubenswrapper[3556]: I1128 00:17:07.662173 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-fqbbn" podUID="9ccee53e-7afd-4302-8b8e-5dfc9c4b5976" containerName="registry-server" probeResult="failure" output=<
Nov 28 00:17:07 crc kubenswrapper[3556]: timeout: failed to connect service ":50051" within 1s
Nov 28 00:17:07 crc kubenswrapper[3556]: >
Nov 28 00:17:07 crc kubenswrapper[3556]: I1128 00:17:07.700763 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-fqbbn" podUID="9ccee53e-7afd-4302-8b8e-5dfc9c4b5976" containerName="registry-server" probeResult="failure" output=<
Nov 28 00:17:07 crc kubenswrapper[3556]: timeout: failed to connect service ":50051" within 1s
Nov 28 00:17:07 crc kubenswrapper[3556]: >
Nov 28 00:17:11 crc kubenswrapper[3556]: I1128 00:17:11.636737 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v"
Nov 28 00:17:13 crc kubenswrapper[3556]: I1128 00:17:13.182242 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klz5m" event={"ID":"b7d96c18-6677-4596-b658-c24d25cd47e2","Type":"ContainerStarted","Data":"5ba7b48b4b4771a90e6fb9c8f956cbf7fcc670be749f4a145ee8beb73a3610d3"}
Nov 28 00:17:13 crc kubenswrapper[3556]: I1128 00:17:13.202738 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-klz5m" podStartSLOduration=14.712492121 podStartE2EDuration="26.202692741s" podCreationTimestamp="2025-11-28 00:16:47 +0000 UTC" firstStartedPulling="2025-11-28 00:16:49.020357641 +0000 UTC m=+270.612589631" lastFinishedPulling="2025-11-28 00:17:00.510558261 +0000 UTC m=+282.102790251" observedRunningTime="2025-11-28 00:17:13.200217532 +0000 UTC m=+294.792449522" watchObservedRunningTime="2025-11-28 00:17:13.202692741 +0000 UTC m=+294.794924741"
Nov 28 00:17:17 crc kubenswrapper[3556]: I1128 00:17:17.929067 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-klz5m"
Nov 28 00:17:17 crc kubenswrapper[3556]: I1128 00:17:17.929286 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-klz5m"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.028628 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-klz5m"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.295941 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-klz5m"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.334766 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klz5m"]
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.689794 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.689913 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.689959 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.689979 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.690065 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:17:18 crc kubenswrapper[3556]: E1128 00:17:18.868950 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff\": container with ID starting with a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff not found: ID does not exist" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff"
Nov 28 00:17:18 crc kubenswrapper[3556]: I1128 00:17:18.869026 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff" err="rpc error: code = NotFound desc = could not find container \"a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff\": container with ID starting with a56163bd96976ea74aba1c86f22da617d6a03538ac47eacc7910be637d7bf8ff not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.869473 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649\": container with ID starting with 79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649 not found: ID does not exist" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.869500 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649" err="rpc error: code = NotFound desc = could not find container \"79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649\": container with ID starting with 79c283f99efa65aebdd5c70a860e4be8de07c70a02e110724c8d177e28696649 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.870369 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843\": container with ID starting with 58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843 not found: ID does not exist" containerID="58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.870392 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843" err="rpc error: code = NotFound desc = could not find container \"58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843\": container with ID starting with 58b55f32eafae666203cdd6fbc4d2636fee478a2b24e4b57e1b52230cdf74843 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.870760 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786\": container with ID starting with f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786 not found: ID does not exist" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.870780 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786" err="rpc error: code = NotFound desc = could not find container \"f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786\": container with ID starting with f432c7fb9551b92a90db75e3b1c003f4281525efd6e3f7f351865ef35c5ea786 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.871083 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6\": container with ID starting with 3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6 not found: ID does not exist" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.871109 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6" err="rpc error: code = NotFound desc = could not find container \"3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6\": container with ID starting with 3e919419d7e26f5e613ad3f3c9052fdc42524d23434e8deabbaeb09b182eb8f6 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.871396 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4\": container with ID starting with 96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4 not found: ID does not exist" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.871424 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4" err="rpc error: code = NotFound desc = could not find container \"96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4\": container with ID starting with 96a85267c5ac9e1059a54b9538ada7b67633a30ca7adf1d4d16cf6033471c5f4 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.871826 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636\": container with ID starting with 936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636 not found: ID does not exist" containerID="936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.871853 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636" err="rpc error: code = NotFound desc = could not find container \"936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636\": container with ID starting with 936c532d2ea4335be6418d05f1cceffee6284c4c1f755194bb383a6e75f88636 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.872233 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f\": container with ID starting with 821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f not found: ID does not exist" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.872261 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f" err="rpc error: code = NotFound desc = could not find container \"821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f\": container with ID starting with 821137b1cd0b6ecccbd1081c1b451b19bfea6dd2e089a4b1001a6cdb31a4256f not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.876267 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9\": container with ID starting with 2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9 not found: ID does not exist" containerID="2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.876318 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9" err="rpc error: code = NotFound desc = could not find container \"2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9\": container with ID starting with 2f758649dde5a0955fe3ef141a27a7c8eea7852f10da149d3fc5720018c059f9 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.876882 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\": container with ID starting with ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077 not found: ID does not exist" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.876949 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077" err="rpc error: code = NotFound desc = could not find container \"ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077\": container with ID starting with ba42ad15bc6c92353d4b7ae95deb709fa5499a0d5b16b9c9c6153679fed8f077 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.878595 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc\": container with ID starting with 0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc not found: ID does not exist" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.878622 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc" err="rpc error: code = NotFound desc = could not find container \"0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc\": container with ID starting with 0faea5dd6bb8aefd0e2039a30acf20b3bfe9e917754e8d9b2a898f4051a2c5dc not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.879791 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963\": container with ID starting with c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963 not found: ID does not exist" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.879815 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963" err="rpc error: code = NotFound desc = could not find container \"c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963\": container with ID starting with c74c246d46562df6bafe28139d83ae2ba55d3f0fc666dc8077050a654e246963 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.880341 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0\": container with ID starting with 955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0 not found: ID does not exist" containerID="955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.880359 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0" err="rpc error: code = NotFound desc = could not find container \"955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0\": container with ID starting with 955cfa5558a348b4ee35f6a2b6d73e526c9554a025e5023e0fb461373cb0f4d0 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.880691 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f\": container with ID starting with 319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f not found: ID does not exist" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.880712 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f" err="rpc error: code = NotFound desc = could not find container \"319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f\": container with ID starting with 319ec802f9a442097e69485c29cd0a5e07ea7f1fe43cf8778e08e37b4cf9f85f not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.881101 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8\": container with ID starting with 30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8 not found: ID does not exist" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.881182 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8" err="rpc error: code = NotFound desc = could not find container \"30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8\": container with ID starting with 30f87fc063214351a2d7f693b5af7355f78f438f8ce6d39d48f6177dfb07e5e8 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: E1128 00:17:18.881876 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8\": container with ID starting with bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8 not found: ID does not exist" containerID="bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:18.881917 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8" err="rpc error: code = NotFound desc = could not find container \"bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8\": container with ID starting with bacbddb576219793667d7bc1f3ccf593e0bd7c1662b2c71d8f1655ddbbcd82e8 not found: ID does not exist"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:20.221310 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-klz5m" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="registry-server" containerID="cri-o://5ba7b48b4b4771a90e6fb9c8f956cbf7fcc670be749f4a145ee8beb73a3610d3" gracePeriod=2
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.236296 3556 generic.go:334] "Generic (PLEG): container finished" podID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerID="5ba7b48b4b4771a90e6fb9c8f956cbf7fcc670be749f4a145ee8beb73a3610d3" exitCode=0
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.236382 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klz5m" event={"ID":"b7d96c18-6677-4596-b658-c24d25cd47e2","Type":"ContainerDied","Data":"5ba7b48b4b4771a90e6fb9c8f956cbf7fcc670be749f4a145ee8beb73a3610d3"}
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.851492 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klz5m"
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.946577 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-catalog-content\") pod \"b7d96c18-6677-4596-b658-c24d25cd47e2\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") "
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.946630 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-utilities\") pod \"b7d96c18-6677-4596-b658-c24d25cd47e2\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") "
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.946668 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbwt\" (UniqueName: \"kubernetes.io/projected/b7d96c18-6677-4596-b658-c24d25cd47e2-kube-api-access-gwbwt\") pod \"b7d96c18-6677-4596-b658-c24d25cd47e2\" (UID: \"b7d96c18-6677-4596-b658-c24d25cd47e2\") "
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.948426 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-utilities" (OuterVolumeSpecName: "utilities") pod "b7d96c18-6677-4596-b658-c24d25cd47e2" (UID: "b7d96c18-6677-4596-b658-c24d25cd47e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:17:22 crc kubenswrapper[3556]: I1128 00:17:22.957476 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d96c18-6677-4596-b658-c24d25cd47e2-kube-api-access-gwbwt" (OuterVolumeSpecName: "kube-api-access-gwbwt") pod "b7d96c18-6677-4596-b658-c24d25cd47e2" (UID: "b7d96c18-6677-4596-b658-c24d25cd47e2"). InnerVolumeSpecName "kube-api-access-gwbwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:17:23 crc kubenswrapper[3556]: I1128 00:17:23.047990 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:17:23 crc kubenswrapper[3556]: I1128 00:17:23.048107 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gwbwt\" (UniqueName: \"kubernetes.io/projected/b7d96c18-6677-4596-b658-c24d25cd47e2-kube-api-access-gwbwt\") on node \"crc\" DevicePath \"\""
Nov 28 00:17:23 crc kubenswrapper[3556]: I1128 00:17:23.245483 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klz5m" event={"ID":"b7d96c18-6677-4596-b658-c24d25cd47e2","Type":"ContainerDied","Data":"b9ca258bac5a2104ea3f1852f245e9464e962065bf9f7a1e25bacca3bd8cf681"}
Nov 28 00:17:23 crc kubenswrapper[3556]: I1128 00:17:23.245533 3556 scope.go:117] "RemoveContainer" containerID="5ba7b48b4b4771a90e6fb9c8f956cbf7fcc670be749f4a145ee8beb73a3610d3"
Nov 28 00:17:23 crc kubenswrapper[3556]: I1128 00:17:23.245543 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klz5m"
Nov 28 00:17:23 crc kubenswrapper[3556]: I1128 00:17:23.305671 3556 scope.go:117] "RemoveContainer" containerID="da5767b9b07d99da13039d1f9d30ff281991e2178c5ad735e494a700b34c8a34"
Nov 28 00:17:29 crc kubenswrapper[3556]: I1128 00:17:29.046131 3556 scope.go:117] "RemoveContainer" containerID="616a5f2b78880e3f333f1b82d04f2bde87d312ce1638ff89842f81ef9aa55650"
Nov 28 00:17:34 crc kubenswrapper[3556]: I1128 00:17:34.322395 3556 generic.go:334] "Generic (PLEG): container finished" podID="3fe93442-3fb2-4ae8-ade9-110f5702aa99" containerID="69b874f26df66aca999b8a26fc51cbdcb85374cc30a62369542831d5cd430474" exitCode=0
Nov 28 00:17:34 crc kubenswrapper[3556]: I1128 00:17:34.322534 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbgdp" event={"ID":"3fe93442-3fb2-4ae8-ade9-110f5702aa99","Type":"ContainerDied","Data":"69b874f26df66aca999b8a26fc51cbdcb85374cc30a62369542831d5cd430474"}
Nov 28 00:17:37 crc kubenswrapper[3556]: I1128 00:17:37.085820 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7d96c18-6677-4596-b658-c24d25cd47e2" (UID: "b7d96c18-6677-4596-b658-c24d25cd47e2"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:17:37 crc kubenswrapper[3556]: I1128 00:17:37.155989 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7d96c18-6677-4596-b658-c24d25cd47e2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 00:17:37 crc kubenswrapper[3556]: I1128 00:17:37.339197 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbgdp" event={"ID":"3fe93442-3fb2-4ae8-ade9-110f5702aa99","Type":"ContainerStarted","Data":"808edef8455e454a0c36a8c4ebd610881b221c1c5588f6607d7809848f294afa"} Nov 28 00:17:37 crc kubenswrapper[3556]: I1128 00:17:37.371282 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbgdp" podStartSLOduration=7.617691649 podStartE2EDuration="54.371216681s" podCreationTimestamp="2025-11-28 00:16:43 +0000 UTC" firstStartedPulling="2025-11-28 00:16:48.013315728 +0000 UTC m=+269.605547718" lastFinishedPulling="2025-11-28 00:17:34.76684072 +0000 UTC m=+316.359072750" observedRunningTime="2025-11-28 00:17:37.366282214 +0000 UTC m=+318.958514294" watchObservedRunningTime="2025-11-28 00:17:37.371216681 +0000 UTC m=+318.963448741" Nov 28 00:17:37 crc kubenswrapper[3556]: I1128 00:17:37.395689 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klz5m"] Nov 28 00:17:37 crc kubenswrapper[3556]: I1128 00:17:37.406212 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-klz5m"] Nov 28 00:17:38 crc kubenswrapper[3556]: I1128 00:17:38.924452 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" path="/var/lib/kubelet/pods/b7d96c18-6677-4596-b658-c24d25cd47e2/volumes" Nov 28 00:17:43 crc kubenswrapper[3556]: I1128 00:17:43.751820 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-fbgdp" Nov 28 00:17:43 crc kubenswrapper[3556]: I1128 00:17:43.752543 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbgdp" Nov 28 00:17:43 crc kubenswrapper[3556]: I1128 00:17:43.881898 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbgdp" Nov 28 00:17:44 crc kubenswrapper[3556]: I1128 00:17:44.480591 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbgdp" Nov 28 00:17:54 crc kubenswrapper[3556]: I1128 00:17:54.430123 3556 generic.go:334] "Generic (PLEG): container finished" podID="e3327d8e-10c1-403b-bad0-cfda7ae4295f" containerID="f2d6eb38dc59ae8a2d96942c5f58267be5af80a5e842d62608f42f96c773e017" exitCode=0 Nov 28 00:17:54 crc kubenswrapper[3556]: I1128 00:17:54.430207 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29404800-brn7x" event={"ID":"e3327d8e-10c1-403b-bad0-cfda7ae4295f","Type":"ContainerDied","Data":"f2d6eb38dc59ae8a2d96942c5f58267be5af80a5e842d62608f42f96c773e017"} Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.660323 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.691651 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vnr6\" (UniqueName: \"kubernetes.io/projected/e3327d8e-10c1-403b-bad0-cfda7ae4295f-kube-api-access-4vnr6\") pod \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.692024 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3327d8e-10c1-403b-bad0-cfda7ae4295f-serviceca\") pod \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\" (UID: \"e3327d8e-10c1-403b-bad0-cfda7ae4295f\") " Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.692602 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3327d8e-10c1-403b-bad0-cfda7ae4295f-serviceca" (OuterVolumeSpecName: "serviceca") pod "e3327d8e-10c1-403b-bad0-cfda7ae4295f" (UID: "e3327d8e-10c1-403b-bad0-cfda7ae4295f"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.699488 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3327d8e-10c1-403b-bad0-cfda7ae4295f-kube-api-access-4vnr6" (OuterVolumeSpecName: "kube-api-access-4vnr6") pod "e3327d8e-10c1-403b-bad0-cfda7ae4295f" (UID: "e3327d8e-10c1-403b-bad0-cfda7ae4295f"). InnerVolumeSpecName "kube-api-access-4vnr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.792749 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4vnr6\" (UniqueName: \"kubernetes.io/projected/e3327d8e-10c1-403b-bad0-cfda7ae4295f-kube-api-access-4vnr6\") on node \"crc\" DevicePath \"\"" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.792809 3556 reconciler_common.go:300] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3327d8e-10c1-403b-bad0-cfda7ae4295f-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995114 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-lv5bc"] Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995239 3556 topology_manager.go:215] "Topology Admit Handler" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" podNamespace="openshift-multus" podName="cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:55 crc kubenswrapper[3556]: E1128 00:17:55.995394 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="e3327d8e-10c1-403b-bad0-cfda7ae4295f" containerName="image-pruner" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995410 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3327d8e-10c1-403b-bad0-cfda7ae4295f" containerName="image-pruner" Nov 28 00:17:55 crc kubenswrapper[3556]: E1128 00:17:55.995426 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="extract-utilities" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995436 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="extract-utilities" Nov 28 00:17:55 crc kubenswrapper[3556]: E1128 00:17:55.995455 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="registry-server" 
Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995463 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="registry-server" Nov 28 00:17:55 crc kubenswrapper[3556]: E1128 00:17:55.995482 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="extract-content" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995491 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="extract-content" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995614 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3327d8e-10c1-403b-bad0-cfda7ae4295f" containerName="image-pruner" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.995640 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d96c18-6677-4596-b658-c24d25cd47e2" containerName="registry-server" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.996308 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.998739 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-smth4" Nov 28 00:17:55 crc kubenswrapper[3556]: I1128 00:17:55.998739 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.097211 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04cb5b9b-8a43-4d87-a03f-a64375e394e9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.097295 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04cb5b9b-8a43-4d87-a03f-a64375e394e9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.097318 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59d54\" (UniqueName: \"kubernetes.io/projected/04cb5b9b-8a43-4d87-a03f-a64375e394e9-kube-api-access-59d54\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.097429 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04cb5b9b-8a43-4d87-a03f-a64375e394e9-ready\") pod 
\"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.198677 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04cb5b9b-8a43-4d87-a03f-a64375e394e9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.199040 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04cb5b9b-8a43-4d87-a03f-a64375e394e9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.198878 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04cb5b9b-8a43-4d87-a03f-a64375e394e9-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.199159 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-59d54\" (UniqueName: \"kubernetes.io/projected/04cb5b9b-8a43-4d87-a03f-a64375e394e9-kube-api-access-59d54\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.199246 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04cb5b9b-8a43-4d87-a03f-a64375e394e9-ready\") pod 
\"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.199629 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04cb5b9b-8a43-4d87-a03f-a64375e394e9-ready\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.199832 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04cb5b9b-8a43-4d87-a03f-a64375e394e9-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.215251 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-59d54\" (UniqueName: \"kubernetes.io/projected/04cb5b9b-8a43-4d87-a03f-a64375e394e9-kube-api-access-59d54\") pod \"cni-sysctl-allowlist-ds-lv5bc\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") " pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.311593 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:56 crc kubenswrapper[3556]: W1128 00:17:56.330299 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04cb5b9b_8a43_4d87_a03f_a64375e394e9.slice/crio-b86a0c57879e729fe9cd654c444525a23a5243c8bf0906a1ca5ce3daa7bf9b41 WatchSource:0}: Error finding container b86a0c57879e729fe9cd654c444525a23a5243c8bf0906a1ca5ce3daa7bf9b41: Status 404 returned error can't find the container with id b86a0c57879e729fe9cd654c444525a23a5243c8bf0906a1ca5ce3daa7bf9b41 Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.440731 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" event={"ID":"04cb5b9b-8a43-4d87-a03f-a64375e394e9","Type":"ContainerStarted","Data":"b86a0c57879e729fe9cd654c444525a23a5243c8bf0906a1ca5ce3daa7bf9b41"} Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.442394 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29404800-brn7x" event={"ID":"e3327d8e-10c1-403b-bad0-cfda7ae4295f","Type":"ContainerDied","Data":"341f3616f1d67dfc39bb55ac54df6390cc286dd49bbb2517250b899c082435e2"} Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.442420 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="341f3616f1d67dfc39bb55ac54df6390cc286dd49bbb2517250b899c082435e2" Nov 28 00:17:56 crc kubenswrapper[3556]: I1128 00:17:56.442449 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29404800-brn7x" Nov 28 00:17:57 crc kubenswrapper[3556]: I1128 00:17:57.448917 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" event={"ID":"04cb5b9b-8a43-4d87-a03f-a64375e394e9","Type":"ContainerStarted","Data":"e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810"} Nov 28 00:17:57 crc kubenswrapper[3556]: I1128 00:17:57.449371 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:57 crc kubenswrapper[3556]: I1128 00:17:57.468191 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" podStartSLOduration=2.468116147 podStartE2EDuration="2.468116147s" podCreationTimestamp="2025-11-28 00:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:17:57.467163972 +0000 UTC m=+339.059395962" watchObservedRunningTime="2025-11-28 00:17:57.468116147 +0000 UTC m=+339.060348197" Nov 28 00:17:57 crc kubenswrapper[3556]: I1128 00:17:57.520261 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" Nov 28 00:17:58 crc kubenswrapper[3556]: I1128 00:17:58.010297 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-lv5bc"] Nov 28 00:17:59 crc kubenswrapper[3556]: I1128 00:17:59.457174 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" gracePeriod=30 Nov 28 00:18:06 crc kubenswrapper[3556]: E1128 00:18:06.313336 3556 
remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:06 crc kubenswrapper[3556]: E1128 00:18:06.315289 3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:06 crc kubenswrapper[3556]: E1128 00:18:06.316719 3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:06 crc kubenswrapper[3556]: E1128 00:18:06.316757 3556 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerName="kube-multus-additional-cni-plugins" Nov 28 00:18:16 crc kubenswrapper[3556]: E1128 00:18:16.313919 3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:16 crc kubenswrapper[3556]: E1128 00:18:16.316186 
3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:16 crc kubenswrapper[3556]: E1128 00:18:16.317720 3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:16 crc kubenswrapper[3556]: E1128 00:18:16.317781 3556 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerName="kube-multus-additional-cni-plugins" Nov 28 00:18:18 crc kubenswrapper[3556]: I1128 00:18:18.690752 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:18:18 crc kubenswrapper[3556]: I1128 00:18:18.691547 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:18:18 crc kubenswrapper[3556]: I1128 00:18:18.691591 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:18:18 crc kubenswrapper[3556]: I1128 00:18:18.691614 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:18:18 crc kubenswrapper[3556]: I1128 00:18:18.691647 3556 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:18:22 crc kubenswrapper[3556]: I1128 00:18:22.663616 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:18:22 crc kubenswrapper[3556]: I1128 00:18:22.664176 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:18:26 crc kubenswrapper[3556]: E1128 00:18:26.314371 3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:26 crc kubenswrapper[3556]: E1128 00:18:26.316069 3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f /ready/ready"] Nov 28 00:18:26 crc kubenswrapper[3556]: E1128 00:18:26.318088 3556 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" cmd=["/bin/bash","-c","test -f 
/ready/ready"] Nov 28 00:18:26 crc kubenswrapper[3556]: E1128 00:18:26.318188 3556 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerName="kube-multus-additional-cni-plugins" Nov 28 00:18:29 crc kubenswrapper[3556]: E1128 00:18:29.604535 3556 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04cb5b9b_8a43_4d87_a03f_a64375e394e9.slice/crio-conmon-e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810.scope\": RecentStats: unable to find data in memory cache]" Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.607869 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-lv5bc_04cb5b9b-8a43-4d87-a03f-a64375e394e9/kube-multus-additional-cni-plugins/0.log" Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.607915 3556 generic.go:334] "Generic (PLEG): container finished" podID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810" exitCode=137 Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.607938 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" event={"ID":"04cb5b9b-8a43-4d87-a03f-a64375e394e9","Type":"ContainerDied","Data":"e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810"} Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.607959 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc" event={"ID":"04cb5b9b-8a43-4d87-a03f-a64375e394e9","Type":"ContainerDied","Data":"b86a0c57879e729fe9cd654c444525a23a5243c8bf0906a1ca5ce3daa7bf9b41"} 
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.607969 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86a0c57879e729fe9cd654c444525a23a5243c8bf0906a1ca5ce3daa7bf9b41"
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.621858 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-lv5bc_04cb5b9b-8a43-4d87-a03f-a64375e394e9/kube-multus-additional-cni-plugins/0.log"
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.622220 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc"
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.701528 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04cb5b9b-8a43-4d87-a03f-a64375e394e9-tuning-conf-dir\") pod \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") "
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.701600 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59d54\" (UniqueName: \"kubernetes.io/projected/04cb5b9b-8a43-4d87-a03f-a64375e394e9-kube-api-access-59d54\") pod \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") "
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.701622 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04cb5b9b-8a43-4d87-a03f-a64375e394e9-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "04cb5b9b-8a43-4d87-a03f-a64375e394e9" (UID: "04cb5b9b-8a43-4d87-a03f-a64375e394e9"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.701643 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04cb5b9b-8a43-4d87-a03f-a64375e394e9-ready\") pod \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") "
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.701799 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04cb5b9b-8a43-4d87-a03f-a64375e394e9-cni-sysctl-allowlist\") pod \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\" (UID: \"04cb5b9b-8a43-4d87-a03f-a64375e394e9\") "
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.702143 3556 reconciler_common.go:300] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04cb5b9b-8a43-4d87-a03f-a64375e394e9-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.702197 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04cb5b9b-8a43-4d87-a03f-a64375e394e9-ready" (OuterVolumeSpecName: "ready") pod "04cb5b9b-8a43-4d87-a03f-a64375e394e9" (UID: "04cb5b9b-8a43-4d87-a03f-a64375e394e9"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.702548 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04cb5b9b-8a43-4d87-a03f-a64375e394e9-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "04cb5b9b-8a43-4d87-a03f-a64375e394e9" (UID: "04cb5b9b-8a43-4d87-a03f-a64375e394e9"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.706281 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04cb5b9b-8a43-4d87-a03f-a64375e394e9-kube-api-access-59d54" (OuterVolumeSpecName: "kube-api-access-59d54") pod "04cb5b9b-8a43-4d87-a03f-a64375e394e9" (UID: "04cb5b9b-8a43-4d87-a03f-a64375e394e9"). InnerVolumeSpecName "kube-api-access-59d54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.803007 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-59d54\" (UniqueName: \"kubernetes.io/projected/04cb5b9b-8a43-4d87-a03f-a64375e394e9-kube-api-access-59d54\") on node \"crc\" DevicePath \"\""
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.803108 3556 reconciler_common.go:300] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04cb5b9b-8a43-4d87-a03f-a64375e394e9-ready\") on node \"crc\" DevicePath \"\""
Nov 28 00:18:29 crc kubenswrapper[3556]: I1128 00:18:29.803140 3556 reconciler_common.go:300] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04cb5b9b-8a43-4d87-a03f-a64375e394e9-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Nov 28 00:18:30 crc kubenswrapper[3556]: I1128 00:18:30.611472 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-lv5bc"
Nov 28 00:18:30 crc kubenswrapper[3556]: I1128 00:18:30.636686 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-lv5bc"]
Nov 28 00:18:30 crc kubenswrapper[3556]: I1128 00:18:30.639706 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-lv5bc"]
Nov 28 00:18:30 crc kubenswrapper[3556]: I1128 00:18:30.921726 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" path="/var/lib/kubelet/pods/04cb5b9b-8a43-4d87-a03f-a64375e394e9/volumes"
Nov 28 00:18:52 crc kubenswrapper[3556]: I1128 00:18:52.664521 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 00:18:52 crc kubenswrapper[3556]: I1128 00:18:52.664921 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 00:19:18 crc kubenswrapper[3556]: I1128 00:19:18.692054 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:19:18 crc kubenswrapper[3556]: I1128 00:19:18.692899 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:19:18 crc kubenswrapper[3556]: I1128 00:19:18.692963 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:19:18 crc kubenswrapper[3556]: I1128 00:19:18.693082 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:19:18 crc kubenswrapper[3556]: I1128 00:19:18.693160 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.664780 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.665439 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.665493 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.666548 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"acafa606c4aa1bb9f7edfa1daf5c757ca7084d520498133fa4c1d1f00743db14"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.666881 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://acafa606c4aa1bb9f7edfa1daf5c757ca7084d520498133fa4c1d1f00743db14" gracePeriod=600
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.893945 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="acafa606c4aa1bb9f7edfa1daf5c757ca7084d520498133fa4c1d1f00743db14" exitCode=0
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.893999 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"acafa606c4aa1bb9f7edfa1daf5c757ca7084d520498133fa4c1d1f00743db14"}
Nov 28 00:19:22 crc kubenswrapper[3556]: I1128 00:19:22.894058 3556 scope.go:117] "RemoveContainer" containerID="5825caecff59ec411acfa2888077a9dd43f86687eece88fb8f014b10c1a3740e"
Nov 28 00:19:23 crc kubenswrapper[3556]: I1128 00:19:23.904671 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"756add6244838c2be85afcde4726595ecd7b69e02660adc403684ace5b7b9f01"}
Nov 28 00:19:41 crc kubenswrapper[3556]: I1128 00:19:41.589526 3556 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.092618 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-s722w"]
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.093255 3556 topology_manager.go:215] "Topology Admit Handler" podUID="06a8120b-71ab-4c2b-90cf-17e3d48304db" podNamespace="openshift-image-registry" podName="image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: E1128 00:19:59.093446 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerName="kube-multus-additional-cni-plugins"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.093463 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerName="kube-multus-additional-cni-plugins"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.093582 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cb5b9b-8a43-4d87-a03f-a64375e394e9" containerName="kube-multus-additional-cni-plugins"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.094060 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.125185 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-s722w"]
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.175915 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xzs\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-kube-api-access-t9xzs\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.175955 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a8120b-71ab-4c2b-90cf-17e3d48304db-trusted-ca\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.175975 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/06a8120b-71ab-4c2b-90cf-17e3d48304db-ca-trust-extracted\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.176343 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/06a8120b-71ab-4c2b-90cf-17e3d48304db-installation-pull-secrets\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.176409 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-bound-sa-token\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.176434 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-registry-tls\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.176457 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/06a8120b-71ab-4c2b-90cf-17e3d48304db-registry-certificates\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.176484 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.196508 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.277448 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xzs\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-kube-api-access-t9xzs\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.277515 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a8120b-71ab-4c2b-90cf-17e3d48304db-trusted-ca\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.277557 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/06a8120b-71ab-4c2b-90cf-17e3d48304db-ca-trust-extracted\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.277634 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/06a8120b-71ab-4c2b-90cf-17e3d48304db-installation-pull-secrets\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.277673 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-bound-sa-token\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.277722 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-registry-tls\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.277766 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/06a8120b-71ab-4c2b-90cf-17e3d48304db-registry-certificates\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.278170 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/06a8120b-71ab-4c2b-90cf-17e3d48304db-ca-trust-extracted\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.278775 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/06a8120b-71ab-4c2b-90cf-17e3d48304db-trusted-ca\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.279089 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/06a8120b-71ab-4c2b-90cf-17e3d48304db-registry-certificates\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.283898 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-registry-tls\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.284163 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/06a8120b-71ab-4c2b-90cf-17e3d48304db-installation-pull-secrets\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.295808 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xzs\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-kube-api-access-t9xzs\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.302421 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/06a8120b-71ab-4c2b-90cf-17e3d48304db-bound-sa-token\") pod \"image-registry-75b7bb6564-s722w\" (UID: \"06a8120b-71ab-4c2b-90cf-17e3d48304db\") " pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.413255 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:19:59 crc kubenswrapper[3556]: I1128 00:19:59.876924 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-75b7bb6564-s722w"]
Nov 28 00:19:59 crc kubenswrapper[3556]: W1128 00:19:59.891441 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06a8120b_71ab_4c2b_90cf_17e3d48304db.slice/crio-ba081ce3e3ed608d64a49ab6b01371a14e46dcd9557c9ad2f2c5b83ceb07aff7 WatchSource:0}: Error finding container ba081ce3e3ed608d64a49ab6b01371a14e46dcd9557c9ad2f2c5b83ceb07aff7: Status 404 returned error can't find the container with id ba081ce3e3ed608d64a49ab6b01371a14e46dcd9557c9ad2f2c5b83ceb07aff7
Nov 28 00:20:00 crc kubenswrapper[3556]: I1128 00:20:00.119137 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75b7bb6564-s722w" event={"ID":"06a8120b-71ab-4c2b-90cf-17e3d48304db","Type":"ContainerStarted","Data":"ba081ce3e3ed608d64a49ab6b01371a14e46dcd9557c9ad2f2c5b83ceb07aff7"}
Nov 28 00:20:01 crc kubenswrapper[3556]: I1128 00:20:01.127578 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75b7bb6564-s722w" event={"ID":"06a8120b-71ab-4c2b-90cf-17e3d48304db","Type":"ContainerStarted","Data":"f87416123a84952c76de30b8015fb60e5ec10f550e15b19c2ff3b6fb13992a34"}
Nov 28 00:20:01 crc kubenswrapper[3556]: I1128 00:20:01.155496 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-image-registry/image-registry-75b7bb6564-s722w" podStartSLOduration=2.155448754 podStartE2EDuration="2.155448754s" podCreationTimestamp="2025-11-28 00:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:20:01.151809833 +0000 UTC m=+462.744041853" watchObservedRunningTime="2025-11-28 00:20:01.155448754 +0000 UTC m=+462.747680774"
Nov 28 00:20:02 crc kubenswrapper[3556]: I1128 00:20:02.131797 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-75b7bb6564-s722w"
Nov 28 00:20:14 crc kubenswrapper[3556]: I1128 00:20:14.800099 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"]
Nov 28 00:20:14 crc kubenswrapper[3556]: I1128 00:20:14.800894 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" containerID="cri-o://431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776" gracePeriod=30
Nov 28 00:20:14 crc kubenswrapper[3556]: I1128 00:20:14.831625 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"]
Nov 28 00:20:14 crc kubenswrapper[3556]: I1128 00:20:14.832197 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" containerID="cri-o://8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab" gracePeriod=30
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.185302 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.190996 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.222659 3556 generic.go:334] "Generic (PLEG): container finished" podID="1a3e81c3-c292-4130-9436-f94062c91efd" containerID="431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776" exitCode=0
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.222720 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerDied","Data":"431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776"}
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.222761 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf" event={"ID":"1a3e81c3-c292-4130-9436-f94062c91efd","Type":"ContainerDied","Data":"a52910f67f00fc78e8c4ae0721ed574e2efcc1389eb2b85421bfae10cfa956ff"}
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.222782 3556 scope.go:117] "RemoveContainer" containerID="431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.222941 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-778975cc4f-x5vcf"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.228836 3556 generic.go:334] "Generic (PLEG): container finished" podID="21d29937-debd-4407-b2b1-d1053cb0f342" containerID="8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab" exitCode=0
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.228890 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerDied","Data":"8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab"}
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.228924 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs" event={"ID":"21d29937-debd-4407-b2b1-d1053cb0f342","Type":"ContainerDied","Data":"8d47d1aee1afff54392a6ee1709e62dbb51c2836725f7e95b73f0c71a3e8fec6"}
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.228982 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.301865 3556 scope.go:117] "RemoveContainer" containerID="431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776"
Nov 28 00:20:15 crc kubenswrapper[3556]: E1128 00:20:15.302431 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776\": container with ID starting with 431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776 not found: ID does not exist" containerID="431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.302489 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776"} err="failed to get container status \"431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776\": rpc error: code = NotFound desc = could not find container \"431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776\": container with ID starting with 431a13f1ed7958c282c16a2faca656cff15fbd607beca055d3208b596c86a776 not found: ID does not exist"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.302511 3556 scope.go:117] "RemoveContainer" containerID="8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.320573 3556 scope.go:117] "RemoveContainer" containerID="8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab"
Nov 28 00:20:15 crc kubenswrapper[3556]: E1128 00:20:15.320919 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab\": container with ID starting with 8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab not found: ID does not exist" containerID="8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.320955 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab"} err="failed to get container status \"8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab\": rpc error: code = NotFound desc = could not find container \"8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab\": container with ID starting with 8e9d57b027404fee39e4ac9b8acf8d5cf0185e23947e53b543df59d536a7dfab not found: ID does not exist"
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.322380 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.322433 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323243 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323298 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323352 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323411 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323418 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config" (OuterVolumeSpecName: "config") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323436 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323481 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") pod \"1a3e81c3-c292-4130-9436-f94062c91efd\" (UID: \"1a3e81c3-c292-4130-9436-f94062c91efd\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.323517 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") pod \"21d29937-debd-4407-b2b1-d1053cb0f342\" (UID: \"21d29937-debd-4407-b2b1-d1053cb0f342\") "
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.324377 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.324393 3556 reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-config\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.324991 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca" (OuterVolumeSpecName: "client-ca") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.325177 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config" (OuterVolumeSpecName: "config") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.325195 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.332894 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr" (OuterVolumeSpecName: "kube-api-access-v7vkr") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "kube-api-access-v7vkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.333661 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4" (OuterVolumeSpecName: "kube-api-access-pkhl4") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "kube-api-access-pkhl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.335640 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a3e81c3-c292-4130-9436-f94062c91efd" (UID: "1a3e81c3-c292-4130-9436-f94062c91efd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.336661 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21d29937-debd-4407-b2b1-d1053cb0f342" (UID: "21d29937-debd-4407-b2b1-d1053cb0f342"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425895 3556 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d29937-debd-4407-b2b1-d1053cb0f342-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425926 3556 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425939 3556 reconciler_common.go:300] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a3e81c3-c292-4130-9436-f94062c91efd-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425950 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v7vkr\" (UniqueName: \"kubernetes.io/projected/21d29937-debd-4407-b2b1-d1053cb0f342-kube-api-access-v7vkr\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425960 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pkhl4\" (UniqueName: \"kubernetes.io/projected/1a3e81c3-c292-4130-9436-f94062c91efd-kube-api-access-pkhl4\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425970 3556 reconciler_common.go:300] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425980 3556 reconciler_common.go:300] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a3e81c3-c292-4130-9436-f94062c91efd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.425990 3556
reconciler_common.go:300] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d29937-debd-4407-b2b1-d1053cb0f342-config\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.555528 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.559246 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-778975cc4f-x5vcf"] Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.564781 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.567217 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776b8b7477-sfpvs"] Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.952261 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-99b58567f-f9vht"] Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.952429 3556 topology_manager.go:215] "Topology Admit Handler" podUID="8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958" podNamespace="openshift-controller-manager" podName="controller-manager-99b58567f-f9vht" Nov 28 00:20:15 crc kubenswrapper[3556]: E1128 00:20:15.952678 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.952707 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Nov 28 00:20:15 crc kubenswrapper[3556]: E1128 00:20:15.952750 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" 
containerName="route-controller-manager" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.952769 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.953094 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" containerName="controller-manager" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.953139 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" containerName="route-controller-manager" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.953937 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.957569 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.957589 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.957868 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.961519 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.961527 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.961670 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82" Nov 28 00:20:15 crc 
kubenswrapper[3556]: I1128 00:20:15.968883 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 00:20:15 crc kubenswrapper[3556]: I1128 00:20:15.976231 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99b58567f-f9vht"] Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.033363 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbjq\" (UniqueName: \"kubernetes.io/projected/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-kube-api-access-dkbjq\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.034154 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-config\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.034198 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-proxy-ca-bundles\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.034427 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-client-ca\") pod \"controller-manager-99b58567f-f9vht\" (UID: 
\"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.034544 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-serving-cert\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.135846 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbjq\" (UniqueName: \"kubernetes.io/projected/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-kube-api-access-dkbjq\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.135942 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-config\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.135974 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-proxy-ca-bundles\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.136002 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-client-ca\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.136039 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-serving-cert\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.137765 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-proxy-ca-bundles\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.137740 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-client-ca\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.138063 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-config\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.152380 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-serving-cert\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.159911 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbjq\" (UniqueName: \"kubernetes.io/projected/8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958-kube-api-access-dkbjq\") pod \"controller-manager-99b58567f-f9vht\" (UID: \"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958\") " pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.279567 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.746676 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99b58567f-f9vht"] Nov 28 00:20:16 crc kubenswrapper[3556]: W1128 00:20:16.757894 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fabc20d_3cb2_4c79_9ce1_8dd5ce2b4958.slice/crio-01c4cf8c5e6dff14acb7733b7dff345a42cb955323f52f1d36a30ecbc47cd0fa WatchSource:0}: Error finding container 01c4cf8c5e6dff14acb7733b7dff345a42cb955323f52f1d36a30ecbc47cd0fa: Status 404 returned error can't find the container with id 01c4cf8c5e6dff14acb7733b7dff345a42cb955323f52f1d36a30ecbc47cd0fa Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.923547 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3e81c3-c292-4130-9436-f94062c91efd" path="/var/lib/kubelet/pods/1a3e81c3-c292-4130-9436-f94062c91efd/volumes" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.924845 3556 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="21d29937-debd-4407-b2b1-d1053cb0f342" path="/var/lib/kubelet/pods/21d29937-debd-4407-b2b1-d1053cb0f342/volumes" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.949710 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp"] Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.949904 3556 topology_manager.go:215] "Topology Admit Handler" podUID="efeaa66f-42fc-42bc-bbad-71a5047fb302" podNamespace="openshift-route-controller-manager" podName="route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.950956 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.956054 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.956236 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.956429 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.956388 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.955929 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 00:20:16 crc kubenswrapper[3556]: I1128 00:20:16.958871 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 00:20:16 crc 
kubenswrapper[3556]: I1128 00:20:16.964062 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp"] Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.047829 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efeaa66f-42fc-42bc-bbad-71a5047fb302-config\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.047887 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efeaa66f-42fc-42bc-bbad-71a5047fb302-client-ca\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.048114 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrnw\" (UniqueName: \"kubernetes.io/projected/efeaa66f-42fc-42bc-bbad-71a5047fb302-kube-api-access-wsrnw\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.048247 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efeaa66f-42fc-42bc-bbad-71a5047fb302-serving-cert\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 
00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.149045 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efeaa66f-42fc-42bc-bbad-71a5047fb302-client-ca\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.149134 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrnw\" (UniqueName: \"kubernetes.io/projected/efeaa66f-42fc-42bc-bbad-71a5047fb302-kube-api-access-wsrnw\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.149180 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efeaa66f-42fc-42bc-bbad-71a5047fb302-serving-cert\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.149249 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efeaa66f-42fc-42bc-bbad-71a5047fb302-config\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.151323 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/efeaa66f-42fc-42bc-bbad-71a5047fb302-client-ca\") pod 
\"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.151595 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efeaa66f-42fc-42bc-bbad-71a5047fb302-config\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.165050 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efeaa66f-42fc-42bc-bbad-71a5047fb302-serving-cert\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.170476 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrnw\" (UniqueName: \"kubernetes.io/projected/efeaa66f-42fc-42bc-bbad-71a5047fb302-kube-api-access-wsrnw\") pod \"route-controller-manager-cc767df55-mvwlp\" (UID: \"efeaa66f-42fc-42bc-bbad-71a5047fb302\") " pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.243557 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" event={"ID":"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958","Type":"ContainerStarted","Data":"54e9c2632e15468341633f5b2b5e20435b4e3ea617875c9a44d29cd700fe5e75"} Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.243887 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" 
event={"ID":"8fabc20d-3cb2-4c79-9ce1-8dd5ce2b4958","Type":"ContainerStarted","Data":"01c4cf8c5e6dff14acb7733b7dff345a42cb955323f52f1d36a30ecbc47cd0fa"} Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.246299 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.252093 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.264391 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-99b58567f-f9vht" podStartSLOduration=3.264338882 podStartE2EDuration="3.264338882s" podCreationTimestamp="2025-11-28 00:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:20:17.260342371 +0000 UTC m=+478.852574381" watchObservedRunningTime="2025-11-28 00:20:17.264338882 +0000 UTC m=+478.856570892" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.278630 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:17 crc kubenswrapper[3556]: I1128 00:20:17.771647 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp"] Nov 28 00:20:17 crc kubenswrapper[3556]: W1128 00:20:17.775663 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefeaa66f_42fc_42bc_bbad_71a5047fb302.slice/crio-15a4629603398e37c90a282c401ea7f3e8bc4f2786018fccee6ce7a77ea562ac WatchSource:0}: Error finding container 15a4629603398e37c90a282c401ea7f3e8bc4f2786018fccee6ce7a77ea562ac: Status 404 returned error can't find the container with id 15a4629603398e37c90a282c401ea7f3e8bc4f2786018fccee6ce7a77ea562ac Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.251193 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" event={"ID":"efeaa66f-42fc-42bc-bbad-71a5047fb302","Type":"ContainerStarted","Data":"f5705f085fa860e16cb56b0ed858f7be1125fe5888cae9ac4668a24dae8a9dd7"} Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.251541 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" event={"ID":"efeaa66f-42fc-42bc-bbad-71a5047fb302","Type":"ContainerStarted","Data":"15a4629603398e37c90a282c401ea7f3e8bc4f2786018fccee6ce7a77ea562ac"} Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.270081 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" podStartSLOduration=4.270032613 podStartE2EDuration="4.270032613s" podCreationTimestamp="2025-11-28 00:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 00:20:18.266919807 +0000 UTC m=+479.859151817" watchObservedRunningTime="2025-11-28 00:20:18.270032613 +0000 UTC m=+479.862264623" Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.693421 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.693476 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.693512 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.693556 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.693601 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:20:18 crc kubenswrapper[3556]: E1128 00:20:18.970265 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe\": container with ID starting with de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe not found: ID does not exist" containerID="de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe" Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.970325 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe" err="rpc error: code = NotFound desc = could not find container \"de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe\": container with ID starting with 
de330230a01f03a2d68126ab9eeb5198d7000aa6559b4f3461344585212eb3fe not found: ID does not exist" Nov 28 00:20:18 crc kubenswrapper[3556]: E1128 00:20:18.979656 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba\": container with ID starting with 0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba not found: ID does not exist" containerID="0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba" Nov 28 00:20:18 crc kubenswrapper[3556]: I1128 00:20:18.979708 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba" err="rpc error: code = NotFound desc = could not find container \"0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba\": container with ID starting with 0f10a0ff7dcdf058546a57661d593bbd03d3e01cad1ad00d318c0219c343a8ba not found: ID does not exist" Nov 28 00:20:19 crc kubenswrapper[3556]: I1128 00:20:19.254474 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:19 crc kubenswrapper[3556]: I1128 00:20:19.259260 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" Nov 28 00:20:19 crc kubenswrapper[3556]: I1128 00:20:19.419624 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-75b7bb6564-s722w" Nov 28 00:20:19 crc kubenswrapper[3556]: I1128 00:20:19.498060 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Nov 28 00:20:44 crc kubenswrapper[3556]: I1128 00:20:44.640907 3556 kuberuntime_container.go:770] "Killing container with a grace 
period" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" containerID="cri-o://a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458" gracePeriod=30 Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.082409 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.227822 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.227933 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.227994 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.228108 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.228167 3556 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.228221 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.228280 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.228357 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") pod \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\" (UID: \"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319\") " Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.228746 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.229225 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.229265 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.330402 3556 reconciler_common.go:300] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.330488 3556 reconciler_common.go:300] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.330507 3556 reconciler_common.go:300] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.854739 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.855623 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv" (OuterVolumeSpecName: "kube-api-access-scpwv") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "kube-api-access-scpwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.856722 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.858661 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.863189 3556 generic.go:334] "Generic (PLEG): container finished" podID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerID="a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458" exitCode=0 Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.863232 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerDied","Data":"a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458"} Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.863245 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.863274 3556 scope.go:117] "RemoveContainer" containerID="a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.863259 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-75779c45fd-v2j2v" event={"ID":"f9a7bc46-2f44-4aff-9cb5-97c97a4a8319","Type":"ContainerDied","Data":"c5d080b5c1c38e4e193c5f121b1fb946f0e797ddfc721176f38639d89b2b9bf5"} Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.881294 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97" (OuterVolumeSpecName: "registry-storage") pod "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" (UID: "f9a7bc46-2f44-4aff-9cb5-97c97a4a8319"). InnerVolumeSpecName "pvc-f5d86efc-9248-4b55-9b8b-23cf63fe9e97". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.906982 3556 scope.go:117] "RemoveContainer" containerID="a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458" Nov 28 00:20:45 crc kubenswrapper[3556]: E1128 00:20:45.907634 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458\": container with ID starting with a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458 not found: ID does not exist" containerID="a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.907716 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458"} err="failed to get container status \"a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458\": rpc error: code = NotFound desc = could not find container \"a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458\": container with ID starting with a352b9753f61a7d928e038bb3784ad0554c0f56216211a03e4cedacde92b8458 not found: ID does not exist" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.940697 3556 reconciler_common.go:300] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.940772 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-scpwv\" (UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-kube-api-access-scpwv\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.940796 3556 reconciler_common.go:300] "Volume detached for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:45 crc kubenswrapper[3556]: I1128 00:20:45.940815 3556 reconciler_common.go:300] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:20:46 crc kubenswrapper[3556]: I1128 00:20:46.203713 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Nov 28 00:20:46 crc kubenswrapper[3556]: I1128 00:20:46.207641 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-75779c45fd-v2j2v"] Nov 28 00:20:46 crc kubenswrapper[3556]: I1128 00:20:46.924721 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" path="/var/lib/kubelet/pods/f9a7bc46-2f44-4aff-9cb5-97c97a4a8319/volumes" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.460931 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.461480 3556 topology_manager.go:215] "Topology Admit Handler" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" podNamespace="openshift-kube-apiserver" podName="installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: E1128 00:20:51.461685 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.461704 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.461879 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a7bc46-2f44-4aff-9cb5-97c97a4a8319" containerName="registry" 
Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.462436 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.464302 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.467541 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-4kgh8" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.471899 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.621236 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-kubelet-dir\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.621298 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b687ee2-bb13-48d4-be0b-64b3788c072f-kube-api-access\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.621361 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-var-lock\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.722595 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-var-lock\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.722700 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-kubelet-dir\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.722786 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b687ee2-bb13-48d4-be0b-64b3788c072f-kube-api-access\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.722811 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-var-lock\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.722944 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-kubelet-dir\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.755586 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4b687ee2-bb13-48d4-be0b-64b3788c072f-kube-api-access\") pod \"installer-13-crc\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:51 crc kubenswrapper[3556]: I1128 00:20:51.789680 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:20:52 crc kubenswrapper[3556]: I1128 00:20:52.164846 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-13-crc"] Nov 28 00:20:52 crc kubenswrapper[3556]: W1128 00:20:52.177557 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4b687ee2_bb13_48d4_be0b_64b3788c072f.slice/crio-d37c7473387470b01c0abddee3403083050c34e720a81293e8a12b58837864f6 WatchSource:0}: Error finding container d37c7473387470b01c0abddee3403083050c34e720a81293e8a12b58837864f6: Status 404 returned error can't find the container with id d37c7473387470b01c0abddee3403083050c34e720a81293e8a12b58837864f6 Nov 28 00:20:52 crc kubenswrapper[3556]: I1128 00:20:52.905473 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"4b687ee2-bb13-48d4-be0b-64b3788c072f","Type":"ContainerStarted","Data":"d37c7473387470b01c0abddee3403083050c34e720a81293e8a12b58837864f6"} Nov 28 00:20:53 crc kubenswrapper[3556]: I1128 00:20:53.912234 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"4b687ee2-bb13-48d4-be0b-64b3788c072f","Type":"ContainerStarted","Data":"169a0fab70fb9302ff58c4ce6ff26b14e96fe4263464883848495824effdba77"} Nov 28 00:20:53 crc kubenswrapper[3556]: I1128 00:20:53.931216 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-13-crc" podStartSLOduration=2.930959796 podStartE2EDuration="2.930959796s" podCreationTimestamp="2025-11-28 00:20:51 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:20:53.924172918 +0000 UTC m=+515.516404948" watchObservedRunningTime="2025-11-28 00:20:53.930959796 +0000 UTC m=+515.523191846" Nov 28 00:21:02 crc kubenswrapper[3556]: I1128 00:21:02.076305 3556 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Nov 28 00:21:02 crc kubenswrapper[3556]: I1128 00:21:02.915319 3556 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Nov 28 00:21:09 crc kubenswrapper[3556]: I1128 00:21:09.787464 3556 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Nov 28 00:21:09 crc kubenswrapper[3556]: I1128 00:21:09.788061 3556 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 00:21:09 crc kubenswrapper[3556]: I1128 00:21:09.816691 3556 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Nov 28 00:21:09 crc kubenswrapper[3556]: I1128 00:21:09.857797 3556 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Nov 28 00:21:11 crc kubenswrapper[3556]: I1128 00:21:11.014391 3556 dynamic_cafile_content.go:211] "Failed to remove file watch, it may have been deleted" 
file="/etc/kubernetes/kubelet-ca.crt" err="fsnotify: can't remove non-existent watch: /etc/kubernetes/kubelet-ca.crt" Nov 28 00:21:18 crc kubenswrapper[3556]: I1128 00:21:18.694654 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:21:18 crc kubenswrapper[3556]: I1128 00:21:18.695377 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:21:18 crc kubenswrapper[3556]: I1128 00:21:18.695416 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:21:18 crc kubenswrapper[3556]: I1128 00:21:18.695480 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:21:18 crc kubenswrapper[3556]: I1128 00:21:18.695533 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:21:19 crc kubenswrapper[3556]: E1128 00:21:19.007795 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53\": container with ID starting with dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53 not found: ID does not exist" containerID="dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53" Nov 28 00:21:19 crc kubenswrapper[3556]: I1128 00:21:19.008200 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53" err="rpc error: code = NotFound desc = could not find container \"dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53\": container with ID starting with dc62e76377abe761c91fc70b8c010469ee052b1cdb26156cc98186814ab9ea53 not found: ID does not exist" 
Nov 28 00:21:30 crc kubenswrapper[3556]: E1128 00:21:30.915141 3556 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.921722 3556 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.921950 3556 topology_manager.go:215] "Topology Admit Handler" podUID="7dae59545f22b3fb679a7fbf878a6379" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-startup-monitor-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.922686 3556 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.922850 3556 kubelet.go:2429] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.922984 3556 topology_manager.go:215] "Topology Admit Handler" podUID="7f3419c3ca30b18b78e8dd2488b00489" podNamespace="openshift-kube-apiserver" podName="kube-apiserver-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.923229 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" containerID="cri-o://6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de" gracePeriod=15 Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.923280 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c" gracePeriod=15 Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.923337 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer" containerID="cri-o://87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13" gracePeriod=15 Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.923410 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d" gracePeriod=15 Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.922896 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.923573 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2" gracePeriod=15 Nov 28 00:21:30 crc kubenswrapper[3556]: E1128 00:21:30.923735 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.923865 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" Nov 28 00:21:30 crc kubenswrapper[3556]: E1128 00:21:30.923987 3556 cpu_manager.go:396] "RemoveStaleState: removing container" 
podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.924142 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz" Nov 28 00:21:30 crc kubenswrapper[3556]: E1128 00:21:30.924264 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.924380 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 00:21:30 crc kubenswrapper[3556]: E1128 00:21:30.924503 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.926072 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer" Nov 28 00:21:30 crc kubenswrapper[3556]: E1128 00:21:30.926218 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="setup" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.926322 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="setup" Nov 28 00:21:30 crc kubenswrapper[3556]: E1128 00:21:30.926426 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.926526 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.926835 3556 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-insecure-readyz" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.927102 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-check-endpoints" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.927226 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-syncer" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.927412 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.927522 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae85115fdc231b4002b57317b41a6400" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.931647 3556 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="ae85115fdc231b4002b57317b41a6400" podUID="7f3419c3ca30b18b78e8dd2488b00489" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.997283 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.997667 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.997729 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.998116 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.998152 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.998191 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.998350 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:30 crc kubenswrapper[3556]: I1128 00:21:30.998409 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.016894 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099781 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099832 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099864 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099885 
3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099908 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099937 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099952 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.099987 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100062 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100084 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100094 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100111 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100134 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100067 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100137 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7f3419c3ca30b18b78e8dd2488b00489-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"7f3419c3ca30b18b78e8dd2488b00489\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.100282 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.116024 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log" Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.116879 3556 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2" exitCode=0 Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.116930 3556 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d" exitCode=0 Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.116954 3556 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c" exitCode=0 Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.116975 3556 generic.go:334] "Generic (PLEG): 
container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13" exitCode=2 Nov 28 00:21:31 crc kubenswrapper[3556]: I1128 00:21:31.293451 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 00:21:31 crc kubenswrapper[3556]: E1128 00:21:31.323637 3556 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c03c34a50f323 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7dae59545f22b3fb679a7fbf878a6379,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:21:31.322716963 +0000 UTC m=+552.914948953,LastTimestamp:2025-11-28 00:21:31.322716963 +0000 UTC m=+552.914948953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:21:32 crc kubenswrapper[3556]: I1128 00:21:32.123983 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7dae59545f22b3fb679a7fbf878a6379","Type":"ContainerStarted","Data":"d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429"} Nov 28 00:21:32 crc kubenswrapper[3556]: I1128 00:21:32.124647 3556 kubelet.go:2461] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"7dae59545f22b3fb679a7fbf878a6379","Type":"ContainerStarted","Data":"649cfca8ae6e0f6a91815e52c043c77897aa263b7274a4dd12071f5f65dd539c"} Nov 28 00:21:32 crc kubenswrapper[3556]: I1128 00:21:32.125157 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.355810 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.357364 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.358108 3556 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.358738 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428446 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428533 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428554 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") pod \"ae85115fdc231b4002b57317b41a6400\" (UID: \"ae85115fdc231b4002b57317b41a6400\") " Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428556 3556 operation_generator.go:887] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428624 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428723 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ae85115fdc231b4002b57317b41a6400" (UID: "ae85115fdc231b4002b57317b41a6400"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428731 3556 reconciler_common.go:300] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.428747 3556 reconciler_common.go:300] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 28 00:21:33 crc kubenswrapper[3556]: I1128 00:21:33.529988 3556 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ae85115fdc231b4002b57317b41a6400-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.136665 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_ae85115fdc231b4002b57317b41a6400/kube-apiserver-cert-syncer/2.log" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.137808 3556 generic.go:334] "Generic (PLEG): container finished" podID="ae85115fdc231b4002b57317b41a6400" containerID="6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de" exitCode=0 Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.137858 3556 scope.go:117] "RemoveContainer" containerID="c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.137882 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.152988 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.153374 3556 status_manager.go:853] "Failed to get status for pod" podUID="ae85115fdc231b4002b57317b41a6400" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.168779 3556 scope.go:117] "RemoveContainer" containerID="eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.188459 3556 scope.go:117] "RemoveContainer" containerID="a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.207727 3556 scope.go:117] "RemoveContainer" containerID="87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.226853 3556 scope.go:117] "RemoveContainer" containerID="6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.250299 3556 scope.go:117] "RemoveContainer" containerID="238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.287621 3556 scope.go:117] "RemoveContainer" containerID="c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2" Nov 28 00:21:34 crc kubenswrapper[3556]: E1128 
00:21:34.288800 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2\": container with ID starting with c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2 not found: ID does not exist" containerID="c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.288851 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2"} err="failed to get container status \"c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2\": rpc error: code = NotFound desc = could not find container \"c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2\": container with ID starting with c6f11f15bded007dda9a99f5b5ff7ede8f35287e06562003c6031a9a36c25da2 not found: ID does not exist" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.288861 3556 scope.go:117] "RemoveContainer" containerID="eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d" Nov 28 00:21:34 crc kubenswrapper[3556]: E1128 00:21:34.289427 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d\": container with ID starting with eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d not found: ID does not exist" containerID="eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.289487 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d"} err="failed to get container status 
\"eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d\": rpc error: code = NotFound desc = could not find container \"eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d\": container with ID starting with eaeb6c15b86168d5b108efb713480fee79eebc09cb1b0fe702109125bd71006d not found: ID does not exist" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.289503 3556 scope.go:117] "RemoveContainer" containerID="a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c" Nov 28 00:21:34 crc kubenswrapper[3556]: E1128 00:21:34.291832 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c\": container with ID starting with a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c not found: ID does not exist" containerID="a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.291896 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c"} err="failed to get container status \"a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c\": rpc error: code = NotFound desc = could not find container \"a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c\": container with ID starting with a0716622bdbaacc36694ebf908ccc0c768eb31880b56a4ef9e6e3626821fdf2c not found: ID does not exist" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.291912 3556 scope.go:117] "RemoveContainer" containerID="87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13" Nov 28 00:21:34 crc kubenswrapper[3556]: E1128 00:21:34.293930 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13\": container with ID starting with 87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13 not found: ID does not exist" containerID="87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.293972 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13"} err="failed to get container status \"87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13\": rpc error: code = NotFound desc = could not find container \"87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13\": container with ID starting with 87a121203ec5ee4d33b6a3c50d08d60e127bc39893d222f2f8403435236fdc13 not found: ID does not exist" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.293982 3556 scope.go:117] "RemoveContainer" containerID="6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de" Nov 28 00:21:34 crc kubenswrapper[3556]: E1128 00:21:34.295224 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de\": container with ID starting with 6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de not found: ID does not exist" containerID="6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.295288 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de"} err="failed to get container status \"6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de\": rpc error: code = NotFound desc = could not find container \"6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de\": 
container with ID starting with 6ac59e38abb2a44bb568d0d697852bd13ea045fc71fa997c24c654a1825c12de not found: ID does not exist" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.295298 3556 scope.go:117] "RemoveContainer" containerID="238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8" Nov 28 00:21:34 crc kubenswrapper[3556]: E1128 00:21:34.295571 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8\": container with ID starting with 238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8 not found: ID does not exist" containerID="238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.295592 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8"} err="failed to get container status \"238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8\": rpc error: code = NotFound desc = could not find container \"238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8\": container with ID starting with 238f834584b242d9fc14ae69c7bc8192a61aaa4054740de6bead2a6ff19b00b8 not found: ID does not exist" Nov 28 00:21:34 crc kubenswrapper[3556]: I1128 00:21:34.917828 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae85115fdc231b4002b57317b41a6400" path="/var/lib/kubelet/pods/ae85115fdc231b4002b57317b41a6400/volumes" Nov 28 00:21:36 crc kubenswrapper[3556]: I1128 00:21:36.152241 3556 generic.go:334] "Generic (PLEG): container finished" podID="4b687ee2-bb13-48d4-be0b-64b3788c072f" containerID="169a0fab70fb9302ff58c4ce6ff26b14e96fe4263464883848495824effdba77" exitCode=0 Nov 28 00:21:36 crc kubenswrapper[3556]: I1128 00:21:36.152289 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"4b687ee2-bb13-48d4-be0b-64b3788c072f","Type":"ContainerDied","Data":"169a0fab70fb9302ff58c4ce6ff26b14e96fe4263464883848495824effdba77"} Nov 28 00:21:36 crc kubenswrapper[3556]: I1128 00:21:36.153001 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:36 crc kubenswrapper[3556]: I1128 00:21:36.153434 3556 status_manager.go:853] "Failed to get status for pod" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.398442 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.399689 3556 status_manager.go:853] "Failed to get status for pod" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.399913 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.469837 3556 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?resourceVersion=0&timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.470270 3556 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.470462 3556 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.470631 3556 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.470819 3556 kubelet_node_status.go:594] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.470836 3556 kubelet_node_status.go:581] "Unable to update node status" err="update node status exceeds retry count" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.476247 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-kubelet-dir\") pod \"4b687ee2-bb13-48d4-be0b-64b3788c072f\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.476295 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b687ee2-bb13-48d4-be0b-64b3788c072f-kube-api-access\") pod \"4b687ee2-bb13-48d4-be0b-64b3788c072f\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.476337 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-var-lock\") pod \"4b687ee2-bb13-48d4-be0b-64b3788c072f\" (UID: \"4b687ee2-bb13-48d4-be0b-64b3788c072f\") " Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.476567 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-var-lock" (OuterVolumeSpecName: "var-lock") pod "4b687ee2-bb13-48d4-be0b-64b3788c072f" (UID: "4b687ee2-bb13-48d4-be0b-64b3788c072f"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.476590 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b687ee2-bb13-48d4-be0b-64b3788c072f" (UID: "4b687ee2-bb13-48d4-be0b-64b3788c072f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.481938 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b687ee2-bb13-48d4-be0b-64b3788c072f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b687ee2-bb13-48d4-be0b-64b3788c072f" (UID: "4b687ee2-bb13-48d4-be0b-64b3788c072f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.577577 3556 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.577634 3556 reconciler_common.go:300] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b687ee2-bb13-48d4-be0b-64b3788c072f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.577656 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b687ee2-bb13-48d4-be0b-64b3788c072f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.755620 3556 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.756253 3556 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.756690 3556 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.756955 3556 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.757231 3556 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:37 crc kubenswrapper[3556]: I1128 00:21:37.757270 3556 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.757753 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="200ms" Nov 28 00:21:37 crc kubenswrapper[3556]: E1128 00:21:37.959189 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="400ms" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.164403 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-13-crc" event={"ID":"4b687ee2-bb13-48d4-be0b-64b3788c072f","Type":"ContainerDied","Data":"d37c7473387470b01c0abddee3403083050c34e720a81293e8a12b58837864f6"} Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.164466 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d37c7473387470b01c0abddee3403083050c34e720a81293e8a12b58837864f6" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.164483 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-13-crc" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.165609 3556 status_manager.go:853] "Failed to get status for pod" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.166172 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.189405 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.190050 3556 status_manager.go:853] "Failed to get status for pod" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:38 crc kubenswrapper[3556]: E1128 00:21:38.360970 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="800ms" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.921543 3556 status_manager.go:853] "Failed to get status for pod" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:38 crc kubenswrapper[3556]: I1128 00:21:38.922262 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:39 crc kubenswrapper[3556]: E1128 00:21:39.161803 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: 
connection refused" interval="1.6s" Nov 28 00:21:39 crc kubenswrapper[3556]: E1128 00:21:39.933179 3556 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.223:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c03c34a50f323 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:7dae59545f22b3fb679a7fbf878a6379,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:2aa3d89686e4084a0c98a021b05c0ce9e83e25ececba894f79964c55d4693f69\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 00:21:31.322716963 +0000 UTC m=+552.914948953,LastTimestamp:2025-11-28 00:21:31.322716963 +0000 UTC m=+552.914948953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 00:21:40 crc kubenswrapper[3556]: E1128 00:21:40.763117 3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.223:6443: connect: connection refused" interval="3.2s" Nov 28 00:21:41 crc kubenswrapper[3556]: I1128 00:21:41.913177 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:41 crc kubenswrapper[3556]: I1128 00:21:41.915351 3556 status_manager.go:853] "Failed to get status for pod" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:41 crc kubenswrapper[3556]: I1128 00:21:41.915734 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:41 crc kubenswrapper[3556]: I1128 00:21:41.931668 3556 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:41 crc kubenswrapper[3556]: I1128 00:21:41.931696 3556 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:41 crc kubenswrapper[3556]: E1128 00:21:41.932199 3556 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:41 crc kubenswrapper[3556]: I1128 00:21:41.932825 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:41 crc kubenswrapper[3556]: W1128 00:21:41.957154 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f3419c3ca30b18b78e8dd2488b00489.slice/crio-b5de44aaaf41f50ebb3cffaf8a295e929731af5723e962e9cdbde11d55df1c54 WatchSource:0}: Error finding container b5de44aaaf41f50ebb3cffaf8a295e929731af5723e962e9cdbde11d55df1c54: Status 404 returned error can't find the container with id b5de44aaaf41f50ebb3cffaf8a295e929731af5723e962e9cdbde11d55df1c54 Nov 28 00:21:42 crc kubenswrapper[3556]: I1128 00:21:42.185262 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"b5de44aaaf41f50ebb3cffaf8a295e929731af5723e962e9cdbde11d55df1c54"} Nov 28 00:21:43 crc kubenswrapper[3556]: I1128 00:21:43.193900 3556 generic.go:334] "Generic (PLEG): container finished" podID="7f3419c3ca30b18b78e8dd2488b00489" containerID="0dac9b0a60c7161e75ed4c978d3e916997a4a4b5a6fb33b2c68687733de0e677" exitCode=0 Nov 28 00:21:43 crc kubenswrapper[3556]: I1128 00:21:43.193998 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerDied","Data":"0dac9b0a60c7161e75ed4c978d3e916997a4a4b5a6fb33b2c68687733de0e677"} Nov 28 00:21:43 crc kubenswrapper[3556]: I1128 00:21:43.194529 3556 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:43 crc kubenswrapper[3556]: I1128 00:21:43.194562 3556 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:43 crc kubenswrapper[3556]: I1128 00:21:43.195098 3556 status_manager.go:853] 
"Failed to get status for pod" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" pod="openshift-kube-apiserver/installer-13-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-13-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:43 crc kubenswrapper[3556]: E1128 00:21:43.195311 3556 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:43 crc kubenswrapper[3556]: I1128 00:21:43.195801 3556 status_manager.go:853] "Failed to get status for pod" podUID="7dae59545f22b3fb679a7fbf878a6379" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.223:6443: connect: connection refused" Nov 28 00:21:44 crc kubenswrapper[3556]: I1128 00:21:44.207213 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"5817edc987d2213c7f7105a6a35113a94903b752cd4c9efce1bc1c3110863216"} Nov 28 00:21:44 crc kubenswrapper[3556]: I1128 00:21:44.207521 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"04a3fb6b905074b96a43a9b062fffd609bf25c31bee68ae5142f53ef4a9c0486"} Nov 28 00:21:44 crc kubenswrapper[3556]: I1128 00:21:44.207536 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"eb5f4b87b16c4509d976c8d872c7f13a4801edb6349feccb9941ffbbafbce625"} Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.215333 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"d3fa5fbc043b8d0fc98461656503e0a8c98f8c1dbfc7e3c5ed44fd9e40f160e6"} Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.215575 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"7f3419c3ca30b18b78e8dd2488b00489","Type":"ContainerStarted","Data":"c9b81338ff312c72e4b9d924bcac39442f2b6c06538c509c1c2bc13ebe452fe2"} Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.215845 3556 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.215868 3556 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.219098 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/2.log" Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.219172 3556 generic.go:334] "Generic (PLEG): container finished" podID="bd6a3a59e513625ca0ae3724df2686bc" containerID="00c4fd2ed360e13891c41dd4a8e389d89e9453542b13dde1c17f926f7ba2d74c" exitCode=1 Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.219206 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerDied","Data":"00c4fd2ed360e13891c41dd4a8e389d89e9453542b13dde1c17f926f7ba2d74c"} Nov 28 00:21:45 crc kubenswrapper[3556]: I1128 00:21:45.219748 3556 scope.go:117] "RemoveContainer" containerID="00c4fd2ed360e13891c41dd4a8e389d89e9453542b13dde1c17f926f7ba2d74c" Nov 28 00:21:46 crc kubenswrapper[3556]: I1128 00:21:46.237653 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_bd6a3a59e513625ca0ae3724df2686bc/kube-controller-manager/2.log" Nov 28 00:21:46 crc kubenswrapper[3556]: I1128 00:21:46.237722 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"bd6a3a59e513625ca0ae3724df2686bc","Type":"ContainerStarted","Data":"26141771579ae51871c7f6f5ba0231398ee8f18ab4e549fc435b822bc13ebc7d"} Nov 28 00:21:46 crc kubenswrapper[3556]: I1128 00:21:46.933824 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:46 crc kubenswrapper[3556]: I1128 00:21:46.934359 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:46 crc kubenswrapper[3556]: I1128 00:21:46.941560 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:47 crc kubenswrapper[3556]: I1128 00:21:47.504788 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:21:50 crc kubenswrapper[3556]: I1128 00:21:50.276827 3556 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:50 crc kubenswrapper[3556]: I1128 00:21:50.339892 3556 status_manager.go:863] "Pod was deleted and then recreated, skipping status 
update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="d3c5b0b2-8ee1-49af-86bc-4e86b5c8fd05" Nov 28 00:21:51 crc kubenswrapper[3556]: I1128 00:21:51.259546 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:21:51 crc kubenswrapper[3556]: I1128 00:21:51.259717 3556 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:51 crc kubenswrapper[3556]: I1128 00:21:51.259747 3556 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:51 crc kubenswrapper[3556]: I1128 00:21:51.263329 3556 status_manager.go:863] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="d3c5b0b2-8ee1-49af-86bc-4e86b5c8fd05" Nov 28 00:21:52 crc kubenswrapper[3556]: I1128 00:21:52.227573 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:21:52 crc kubenswrapper[3556]: I1128 00:21:52.233060 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:21:52 crc kubenswrapper[3556]: I1128 00:21:52.264082 3556 kubelet.go:1917] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:52 crc kubenswrapper[3556]: I1128 00:21:52.264389 3556 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d1b73e61-d8d2-4892-8a19-005929c9d4e1" Nov 28 00:21:52 crc kubenswrapper[3556]: I1128 00:21:52.267536 3556 status_manager.go:863] "Pod was deleted and then recreated, 
skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="7f3419c3ca30b18b78e8dd2488b00489" podUID="d3c5b0b2-8ee1-49af-86bc-4e86b5c8fd05" Nov 28 00:21:52 crc kubenswrapper[3556]: I1128 00:21:52.664643 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:21:52 crc kubenswrapper[3556]: I1128 00:21:52.664775 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:21:56 crc kubenswrapper[3556]: I1128 00:21:56.946043 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 00:21:57 crc kubenswrapper[3556]: I1128 00:21:57.509003 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 00:21:59 crc kubenswrapper[3556]: I1128 00:21:59.831377 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 00:22:01 crc kubenswrapper[3556]: I1128 00:22:01.858101 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 00:22:01 crc kubenswrapper[3556]: I1128 00:22:01.984897 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 00:22:02 crc kubenswrapper[3556]: I1128 00:22:02.341532 3556 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Nov 28 00:22:02 crc kubenswrapper[3556]: I1128 00:22:02.403677 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 00:22:02 crc kubenswrapper[3556]: I1128 00:22:02.936421 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.001345 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.090809 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.273348 3556 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.276428 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=33.27638403 podStartE2EDuration="33.27638403s" podCreationTimestamp="2025-11-28 00:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:21:50.314307895 +0000 UTC m=+571.906539885" watchObservedRunningTime="2025-11-28 00:22:03.27638403 +0000 UTC m=+584.868616030" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.278044 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.278088 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.281836 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.296151 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.296116019 podStartE2EDuration="13.296116019s" podCreationTimestamp="2025-11-28 00:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:22:03.29473223 +0000 UTC m=+584.886964230" watchObservedRunningTime="2025-11-28 00:22:03.296116019 +0000 UTC m=+584.888348019" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.322770 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.344561 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.427289 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.562558 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.617473 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.624270 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.648361 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 00:22:03 crc kubenswrapper[3556]: I1128 00:22:03.761840 3556 reflector.go:351] Caches populated 
for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 00:22:04 crc kubenswrapper[3556]: I1128 00:22:04.055288 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 00:22:04 crc kubenswrapper[3556]: I1128 00:22:04.154592 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 00:22:04 crc kubenswrapper[3556]: I1128 00:22:04.674087 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 00:22:04 crc kubenswrapper[3556]: I1128 00:22:04.761963 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 00:22:04 crc kubenswrapper[3556]: I1128 00:22:04.816489 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.033782 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.103471 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.103616 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.483753 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.544593 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 00:22:05 crc 
kubenswrapper[3556]: I1128 00:22:05.589945 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.595451 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.777254 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.824207 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 00:22:05 crc kubenswrapper[3556]: I1128 00:22:05.826545 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.050832 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.106819 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.172809 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.173591 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.188534 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.333117 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 00:22:06 crc 
kubenswrapper[3556]: I1128 00:22:06.343468 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.350957 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.351448 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.384029 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.386710 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.460763 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.506847 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-sv888" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.544514 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.727506 3556 reflector.go:351] Caches populated for *v1.CSIDriver from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229 Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.770401 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.805995 3556 reflector.go:351] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca"
Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.845468 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.874044 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.913523 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.958218 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.993541 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 28 00:22:06 crc kubenswrapper[3556]: I1128 00:22:06.998652 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.032288 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.035899 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"]
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.036319 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller" containerID="cri-o://f667500e31bbd20e18020f3feda9c5fcb95413c4c60f5ae6b409e073c784b3a5" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.036424 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb" containerID="cri-o://46a8560a393a5439aed7b64a6b5a18f76e9777704ab9f4b63d60bc801f21cb8a" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.036454 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging" containerID="cri-o://add4c854492fb92ad3dfe4f839c8b265eb256f8ee4a5541e1ffbd5863baf61ef" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.036483 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node" containerID="cri-o://4f2242c62043fe6b5b8237b1f7367052a86f5f4d37ec86376ad68540f41166b6" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.036615 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb" containerID="cri-o://0e8abae46f875f61a9baba43204ffb75d748b30121e4cc89d5d3403178aaa207" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.036748 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2f32e2413540f8b606bace46011915b3f4345f8091da03e50af2414bd037a501" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.036615 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd" containerID="cri-o://1f0bc12aff24220a56c1a2424f5c5a776edc66bf8174b52fcc5b43743a6f46d3" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.096907 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.100231 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.140316 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller" containerID="cri-o://a592f23f00130a8b85c7f8ff874d278a6eafb49f164470cc714b0b3cb3f14565" gracePeriod=30
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.162548 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-dwn4s"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.175801 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.220888 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.233245 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.236089 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.236232 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6sd5l"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.395461 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.400257 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/7.log"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.400330 3556 generic.go:334] "Generic (PLEG): container finished" podID="475321a1-8b7e-4033-8f72-b05a8b377347" containerID="b203e8ed09c9350b236814135962bdc19666470cae6146b3024fa04966e01b50" exitCode=2
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.400438 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerDied","Data":"b203e8ed09c9350b236814135962bdc19666470cae6146b3024fa04966e01b50"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.400503 3556 scope.go:117] "RemoveContainer" containerID="90dd7dbcf1699d6c2dd098e8bad21d98d61147b5b5812093844f54c0f01e65f5"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.401430 3556 scope.go:117] "RemoveContainer" containerID="b203e8ed09c9350b236814135962bdc19666470cae6146b3024fa04966e01b50"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.402159 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" podUID="475321a1-8b7e-4033-8f72-b05a8b377347"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.411143 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-acl-logging/1.log"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.411586 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-controller/1.log"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413831 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="a592f23f00130a8b85c7f8ff874d278a6eafb49f164470cc714b0b3cb3f14565" exitCode=0
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413872 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="0e8abae46f875f61a9baba43204ffb75d748b30121e4cc89d5d3403178aaa207" exitCode=0
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413889 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="46a8560a393a5439aed7b64a6b5a18f76e9777704ab9f4b63d60bc801f21cb8a" exitCode=0
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413904 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="1f0bc12aff24220a56c1a2424f5c5a776edc66bf8174b52fcc5b43743a6f46d3" exitCode=0
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413910 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"a592f23f00130a8b85c7f8ff874d278a6eafb49f164470cc714b0b3cb3f14565"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413920 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="2f32e2413540f8b606bace46011915b3f4345f8091da03e50af2414bd037a501" exitCode=0
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413939 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="4f2242c62043fe6b5b8237b1f7367052a86f5f4d37ec86376ad68540f41166b6" exitCode=0
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413948 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"0e8abae46f875f61a9baba43204ffb75d748b30121e4cc89d5d3403178aaa207"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413955 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="add4c854492fb92ad3dfe4f839c8b265eb256f8ee4a5541e1ffbd5863baf61ef" exitCode=143
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413964 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"46a8560a393a5439aed7b64a6b5a18f76e9777704ab9f4b63d60bc801f21cb8a"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413990 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"1f0bc12aff24220a56c1a2424f5c5a776edc66bf8174b52fcc5b43743a6f46d3"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.413973 3556 generic.go:334] "Generic (PLEG): container finished" podID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerID="f667500e31bbd20e18020f3feda9c5fcb95413c4c60f5ae6b409e073c784b3a5" exitCode=143
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.414026 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"2f32e2413540f8b606bace46011915b3f4345f8091da03e50af2414bd037a501"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.414044 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"4f2242c62043fe6b5b8237b1f7367052a86f5f4d37ec86376ad68540f41166b6"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.414058 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"add4c854492fb92ad3dfe4f839c8b265eb256f8ee4a5541e1ffbd5863baf61ef"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.414075 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"f667500e31bbd20e18020f3feda9c5fcb95413c4c60f5ae6b409e073c784b3a5"}
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.420231 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.425369 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.455440 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-acl-logging/1.log"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.455928 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-controller/1.log"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.456866 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.482058 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.488921 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489055 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log" (OuterVolumeSpecName: "node-log") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489120 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489142 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489174 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489209 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489232 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489254 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489402 3556 reconciler_common.go:300] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-node-log\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489677 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489696 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash" (OuterVolumeSpecName: "host-slash") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.489712 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.491305 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket" (OuterVolumeSpecName: "log-socket") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.492270 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.504582 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520217 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngf94"]
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520385 3556 topology_manager.go:215] "Topology Admit Handler" podUID="11899088-c8cf-4bb7-8a2e-e0137d6546e2" podNamespace="openshift-ovn-kubernetes" podName="ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520564 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520578 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520590 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520596 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520610 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520616 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520626 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520635 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520646 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520654 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520665 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" containerName="installer"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520671 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" containerName="installer"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520683 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kubecfg-setup"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520690 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kubecfg-setup"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520699 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520706 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520718 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520725 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb"
Nov 28 00:22:07 crc kubenswrapper[3556]: E1128 00:22:07.520735 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520741 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520850 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="northd"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520859 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="sbdb"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520867 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-node"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520876 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="nbdb"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520888 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-controller"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520897 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovnkube-controller"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520906 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="kube-rbac-proxy-ovn-metrics"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520917 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b687ee2-bb13-48d4-be0b-64b3788c072f" containerName="installer"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.520943 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" containerName="ovn-acl-logging"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.523289 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.525661 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-jpwlq"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590452 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590521 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590551 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590585 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590613 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590610 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590637 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590669 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590704 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590734 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590849 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590892 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590930 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590956 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.590986 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591036 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") pod \"3e19f9e8-9a37-4ca8-9790-c219750ab482\" (UID: \"3e19f9e8-9a37-4ca8-9790-c219750ab482\") "
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591041 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591071 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591185 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591157 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591263 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovnkube-script-lib\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591305 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-slash\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591338 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-log-socket\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591371 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-node-log\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591389 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591409 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591485 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-var-lib-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591570 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-etc-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591593 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591616 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-cni-netd\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591664 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591786 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-run-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591844 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-systemd-units\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591878 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovnkube-config\") pod
\"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591923 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-run-netns\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591963 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-run-ovn\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.591997 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-env-overrides\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592070 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r596m\" (UniqueName: \"kubernetes.io/projected/11899088-c8cf-4bb7-8a2e-e0137d6546e2-kube-api-access-r596m\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592218 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592269 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-cni-bin\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592330 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-kubelet\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592388 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovn-node-metrics-cert\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592482 3556 reconciler_common.go:300] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592505 3556 reconciler_common.go:300] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovn-node-metrics-cert\") on 
node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592522 3556 reconciler_common.go:300] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592537 3556 reconciler_common.go:300] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592566 3556 reconciler_common.go:300] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592580 3556 reconciler_common.go:300] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592595 3556 reconciler_common.go:300] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3e19f9e8-9a37-4ca8-9790-c219750ab482-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592619 3556 reconciler_common.go:300] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592635 3556 reconciler_common.go:300] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592647 3556 reconciler_common.go:300] "Volume 
detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592657 3556 reconciler_common.go:300] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592667 3556 reconciler_common.go:300] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592682 3556 reconciler_common.go:300] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592698 3556 reconciler_common.go:300] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592709 3556 reconciler_common.go:300] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592721 3556 reconciler_common.go:300] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.592731 3556 reconciler_common.go:300] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/3e19f9e8-9a37-4ca8-9790-c219750ab482-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.593590 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495" (OuterVolumeSpecName: "kube-api-access-f9495") pod "3e19f9e8-9a37-4ca8-9790-c219750ab482" (UID: "3e19f9e8-9a37-4ca8-9790-c219750ab482"). InnerVolumeSpecName "kube-api-access-f9495". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.614603 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693268 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovn-node-metrics-cert\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693326 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovnkube-script-lib\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693346 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-slash\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693366 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-log-socket\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693385 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-node-log\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693405 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-var-lib-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693424 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-etc-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693444 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-cni-netd\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693471 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693496 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-run-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693518 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-systemd-units\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693593 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-log-socket\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693796 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovnkube-config\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693828 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-systemd-units\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693865 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-run-netns\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693886 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-run-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693916 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-r596m\" (UniqueName: \"kubernetes.io/projected/11899088-c8cf-4bb7-8a2e-e0137d6546e2-kube-api-access-r596m\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693942 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-cni-netd\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693957 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-run-ovn\") pod 
\"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693921 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-etc-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693978 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693971 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-run-netns\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.693994 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-env-overrides\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694040 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-node-log\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694090 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694090 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-var-lib-openvswitch\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694101 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-run-ovn\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694153 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-cni-bin\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694125 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-cni-bin\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: 
I1128 00:22:07.694121 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-slash\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694237 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694290 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-kubelet\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694262 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/11899088-c8cf-4bb7-8a2e-e0137d6546e2-host-kubelet\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694401 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f9495\" (UniqueName: \"kubernetes.io/projected/3e19f9e8-9a37-4ca8-9790-c219750ab482-kube-api-access-f9495\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.694851 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovnkube-script-lib\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.695668 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovnkube-config\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.696082 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/11899088-c8cf-4bb7-8a2e-e0137d6546e2-env-overrides\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.696348 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/11899088-c8cf-4bb7-8a2e-e0137d6546e2-ovn-node-metrics-cert\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.696404 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.711720 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-r596m\" (UniqueName: \"kubernetes.io/projected/11899088-c8cf-4bb7-8a2e-e0137d6546e2-kube-api-access-r596m\") pod \"ovnkube-node-ngf94\" (UID: \"11899088-c8cf-4bb7-8a2e-e0137d6546e2\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.717083 3556 
reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 00:22:07 crc kubenswrapper[3556]: I1128 00:22:07.857178 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.067468 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.102767 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.108615 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.159723 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.205375 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.268031 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-79vsd" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.289552 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.297486 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.356513 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 
00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.375263 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.419720 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.423313 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-acl-logging/1.log"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.424059 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-44qcg_3e19f9e8-9a37-4ca8-9790-c219750ab482/ovn-controller/1.log"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.424688 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg" event={"ID":"3e19f9e8-9a37-4ca8-9790-c219750ab482","Type":"ContainerDied","Data":"c199f4314aadffe223449b70c532061a711b719d9eb0c631901269df2d2fa349"}
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.424724 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-44qcg"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.424754 3556 scope.go:117] "RemoveContainer" containerID="a592f23f00130a8b85c7f8ff874d278a6eafb49f164470cc714b0b3cb3f14565"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.427398 3556 generic.go:334] "Generic (PLEG): container finished" podID="11899088-c8cf-4bb7-8a2e-e0137d6546e2" containerID="165f63b5c62bbd0688d11597d6464c804f1e1ae2195dd975426d60fc7cab4f5f" exitCode=0
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.427447 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerDied","Data":"165f63b5c62bbd0688d11597d6464c804f1e1ae2195dd975426d60fc7cab4f5f"}
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.427478 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"0044cdd5856c78eafcd12ec315aebd02995fbcb98bf2d492395f94fa1260e806"}
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.430912 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.455197 3556 scope.go:117] "RemoveContainer" containerID="0e8abae46f875f61a9baba43204ffb75d748b30121e4cc89d5d3403178aaa207"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.489960 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"]
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.494692 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-44qcg"]
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.529256 3556 scope.go:117] "RemoveContainer" containerID="46a8560a393a5439aed7b64a6b5a18f76e9777704ab9f4b63d60bc801f21cb8a"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.537408 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.564047 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.593545 3556 scope.go:117] "RemoveContainer" containerID="1f0bc12aff24220a56c1a2424f5c5a776edc66bf8174b52fcc5b43743a6f46d3"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.610709 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.617633 3556 scope.go:117] "RemoveContainer" containerID="2f32e2413540f8b606bace46011915b3f4345f8091da03e50af2414bd037a501"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.642050 3556 scope.go:117] "RemoveContainer" containerID="4f2242c62043fe6b5b8237b1f7367052a86f5f4d37ec86376ad68540f41166b6"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.667293 3556 scope.go:117] "RemoveContainer" containerID="add4c854492fb92ad3dfe4f839c8b265eb256f8ee4a5541e1ffbd5863baf61ef"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.675841 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kpdvz"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.683598 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.693965 3556 scope.go:117] "RemoveContainer" containerID="f667500e31bbd20e18020f3feda9c5fcb95413c4c60f5ae6b409e073c784b3a5"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.753665 3556 scope.go:117] "RemoveContainer" containerID="324b84ed928c7beff552526b8bb7cec0379a0ef0d4d85002e36651b6da681716"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.777481 3556 reflector.go:351] Caches populated for *v1.RuntimeClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.880521 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.920046 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e19f9e8-9a37-4ca8-9790-c219750ab482" path="/var/lib/kubelet/pods/3e19f9e8-9a37-4ca8-9790-c219750ab482/volumes"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.938577 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 28 00:22:08 crc kubenswrapper[3556]: I1128 00:22:08.947244 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"webhook-serving-cert"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.087993 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.093740 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.155102 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.246347 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.265938 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.270668 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.316638 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.398306 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.406275 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.435593 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"5153aa892e5df894ac8969b2c9667930abf53a1f3870145c6528d6092f5563eb"}
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.435620 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"2dadd3a0d3385a2b006e4463da6b2983ab7e584e17d065009a39cb5f612e9bd8"}
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.435631 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"65f38ea4518e358a83720b60d16ba63f5286bf27076f613ca15d2d7313c55904"}
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.435640 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"15ecfbe6d39484eacbd8f00b41c8ed6be4caef5a733e4421a0d0522d2a2da677"}
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.435649 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"4d0a7dc664beffc9beca0d491b76295c1b8c1b30604a87ccf8fd6934926fff8b"}
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.449062 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.477305 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.527599 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.540789 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.551878 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.571624 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.575706 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.592061 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.740611 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.746394 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.781148 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.840565 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.848112 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.924066 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.935099 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 28 00:22:09 crc kubenswrapper[3556]: I1128 00:22:09.936966 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.124409 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.308544 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.337066 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.446410 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"cd344a9840e652e88d2a41707c668b18e71fe71e460e6ead6b474225ded198dd"}
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.455060 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.544976 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.551913 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.571081 3556 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.642704 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.670298 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.678544 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.700073 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.701450 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.753846 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.802134 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 28 00:22:10 crc kubenswrapper[3556]: I1128 00:22:10.861486 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.233537 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.249475 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.259317 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.303579 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.320157 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.347745 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.565082 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.697301 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-q786x"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.759478 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.814811 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.864481 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.879687 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.921942 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 28 00:22:11 crc kubenswrapper[3556]: I1128 00:22:11.967396 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.084088 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.148601 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.153805 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.160588 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-9r4gl"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.235590 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.235670 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.305402 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.331746 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.457477 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"d772b898b66b620d4639d2772cfe725737009ee0e783d648117ab1ea8a700466"}
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.660286 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.730212 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.744275 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-b4zbk"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.778493 3556 kubelet.go:2439] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.778727 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor" containerID="cri-o://d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429" gracePeriod=5
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.781208 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.898446 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.911701 3556 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.970624 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 28 00:22:12 crc kubenswrapper[3556]: I1128 00:22:12.978154 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.035180 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.069361 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.082900 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-ng44q"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.120747 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-r9fjc"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.286848 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.368907 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-twmwc"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.436676 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.446164 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.588420 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.758216 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.818949 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.844367 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.911322 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.918055 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 00:22:13 crc kubenswrapper[3556]: I1128 00:22:13.919057 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.120179 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.165164 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58g82"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.197641 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.206117 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.342688 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.372064 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.410144 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.472312 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" event={"ID":"11899088-c8cf-4bb7-8a2e-e0137d6546e2","Type":"ContainerStarted","Data":"b6d2e908cf8e5538ce7c22c8731cd667b9ea963260043165dffb6890be3a3789"}
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.472947 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.518839 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94" podStartSLOduration=7.518799551 podStartE2EDuration="7.518799551s" podCreationTimestamp="2025-11-28 00:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:22:14.513973606 +0000 UTC m=+596.106205676" watchObservedRunningTime="2025-11-28 00:22:14.518799551 +0000 UTC m=+596.111031551"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.522590 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.527879 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.768044 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.843459 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.844274 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.850440 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.914777 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Nov 28 00:22:14 crc kubenswrapper[3556]: I1128 00:22:14.983938 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.370659 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.476991 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.477053 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.525299 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.637600 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.716002 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.720029 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.978210 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.978526 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 28 00:22:15 crc kubenswrapper[3556]: I1128 00:22:15.991265 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 28 00:22:16 crc kubenswrapper[3556]: I1128 00:22:16.131172 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 28 00:22:16 crc kubenswrapper[3556]: I1128 00:22:16.184164 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 00:22:16 crc kubenswrapper[3556]: I1128 00:22:16.275902 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 28 00:22:16 crc kubenswrapper[3556]: I1128 00:22:16.686266 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 00:22:16 crc kubenswrapper[3556]: I1128 00:22:16.856824 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 28 00:22:17 crc kubenswrapper[3556]: I1128 00:22:17.582960 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 00:22:17 crc kubenswrapper[3556]: I1128 00:22:17.661693 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 28 00:22:17 crc kubenswrapper[3556]: I1128 00:22:17.757858 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.155448 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.383209 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor/0.log"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.383280 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.491933 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_7dae59545f22b3fb679a7fbf878a6379/startup-monitor/0.log"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.491990 3556 generic.go:334] "Generic (PLEG): container finished" podID="7dae59545f22b3fb679a7fbf878a6379" containerID="d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429" exitCode=137
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.492047 3556 scope.go:117] "RemoveContainer" containerID="d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.492161 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.523822 3556 scope.go:117] "RemoveContainer" containerID="d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429"
Nov 28 00:22:18 crc kubenswrapper[3556]: E1128 00:22:18.524399 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429\": container with ID starting with d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429 not found: ID does not exist" containerID="d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.524452 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429"} err="failed to get container status \"d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429\": rpc error: code = NotFound desc = could not find container \"d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429\": container with ID starting with d88bb190f453eb4d8365f6f95fba37707dd77cb7d1f1717a74305848147c2429 not found: ID does not exist"
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528077 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528147 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528202 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528208 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock" (OuterVolumeSpecName: "var-lock") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528229 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528357 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") pod \"7dae59545f22b3fb679a7fbf878a6379\" (UID: \"7dae59545f22b3fb679a7fbf878a6379\") "
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528357 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests" (OuterVolumeSpecName: "manifests") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528447 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log" (OuterVolumeSpecName: "var-log") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528569 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528835 3556 reconciler_common.go:300] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-lock\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528864 3556 reconciler_common.go:300] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-manifests\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528885 3556 reconciler_common.go:300] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-var-log\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.528904 3556 reconciler_common.go:300] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.535260 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "7dae59545f22b3fb679a7fbf878a6379" (UID: "7dae59545f22b3fb679a7fbf878a6379"). InnerVolumeSpecName "pod-resource-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.630837 3556 reconciler_common.go:300] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7dae59545f22b3fb679a7fbf878a6379-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.695778 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.695855 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.695887 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.695920 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.695939 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.714359 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.914701 3556 scope.go:117] "RemoveContainer" containerID="b203e8ed09c9350b236814135962bdc19666470cae6146b3024fa04966e01b50" Nov 28 00:22:18 crc kubenswrapper[3556]: E1128 00:22:18.915265 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-q88th_openshift-multus(475321a1-8b7e-4033-8f72-b05a8b377347)\"" pod="openshift-multus/multus-q88th" 
podUID="475321a1-8b7e-4033-8f72-b05a8b377347" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.925987 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dae59545f22b3fb679a7fbf878a6379" path="/var/lib/kubelet/pods/7dae59545f22b3fb679a7fbf878a6379/volumes" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.926317 3556 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.945923 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.946154 3556 kubelet.go:2639] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c6eb3853-ebcb-4347-8064-63fd11149596" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.959281 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.967361 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 00:22:18 crc kubenswrapper[3556]: I1128 00:22:18.967408 3556 kubelet.go:2663] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c6eb3853-ebcb-4347-8064-63fd11149596" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.050920 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\": container with ID starting with 53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807 not found: ID does not exist" 
containerID="53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.051129 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807" err="rpc error: code = NotFound desc = could not find container \"53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807\": container with ID starting with 53c859e04188764b0d92baab2d894b8e5cc24fc74718e7837e9bf64ec1096807 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.051594 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c1416b8c6a466079801f9be5d7b27550ab5fd354573f9b32cae64e01ed3f695\": container with ID starting with 9c1416b8c6a466079801f9be5d7b27550ab5fd354573f9b32cae64e01ed3f695 not found: ID does not exist" containerID="9c1416b8c6a466079801f9be5d7b27550ab5fd354573f9b32cae64e01ed3f695" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.051718 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="9c1416b8c6a466079801f9be5d7b27550ab5fd354573f9b32cae64e01ed3f695" err="rpc error: code = NotFound desc = could not find container \"9c1416b8c6a466079801f9be5d7b27550ab5fd354573f9b32cae64e01ed3f695\": container with ID starting with 9c1416b8c6a466079801f9be5d7b27550ab5fd354573f9b32cae64e01ed3f695 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.052350 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3591f295d30983c04e7835762f552a23df79c107a576d1a1b68164323f3b29e4\": container with ID starting with 3591f295d30983c04e7835762f552a23df79c107a576d1a1b68164323f3b29e4 not found: ID does not exist" containerID="3591f295d30983c04e7835762f552a23df79c107a576d1a1b68164323f3b29e4" Nov 28 
00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.052492 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="3591f295d30983c04e7835762f552a23df79c107a576d1a1b68164323f3b29e4" err="rpc error: code = NotFound desc = could not find container \"3591f295d30983c04e7835762f552a23df79c107a576d1a1b68164323f3b29e4\": container with ID starting with 3591f295d30983c04e7835762f552a23df79c107a576d1a1b68164323f3b29e4 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.052926 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\": container with ID starting with 8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078 not found: ID does not exist" containerID="8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.052982 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078" err="rpc error: code = NotFound desc = could not find container \"8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078\": container with ID starting with 8ec028dd58f3480de1c152178877ef20363db5cdec32732223f3a6419a431078 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.053604 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\": container with ID starting with a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282 not found: ID does not exist" containerID="a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.053658 3556 kuberuntime_gc.go:360] 
"Error getting ContainerStatus for containerID" containerID="a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282" err="rpc error: code = NotFound desc = could not find container \"a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282\": container with ID starting with a6d2ed4439a7191ab2bfda0bfba1dd031d0a4d540b63ab481e85ae9fcff31282 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.054224 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c806377c89c0ce691a5cb179d3187ae4f02b46440c24281233071fbb06b4366b\": container with ID starting with c806377c89c0ce691a5cb179d3187ae4f02b46440c24281233071fbb06b4366b not found: ID does not exist" containerID="c806377c89c0ce691a5cb179d3187ae4f02b46440c24281233071fbb06b4366b" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.054267 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="c806377c89c0ce691a5cb179d3187ae4f02b46440c24281233071fbb06b4366b" err="rpc error: code = NotFound desc = could not find container \"c806377c89c0ce691a5cb179d3187ae4f02b46440c24281233071fbb06b4366b\": container with ID starting with c806377c89c0ce691a5cb179d3187ae4f02b46440c24281233071fbb06b4366b not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.054575 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"760aee346ddb22427580c02a49a3a1d5ea831c51adeed5dfc8845d170af2f288\": container with ID starting with 760aee346ddb22427580c02a49a3a1d5ea831c51adeed5dfc8845d170af2f288 not found: ID does not exist" containerID="760aee346ddb22427580c02a49a3a1d5ea831c51adeed5dfc8845d170af2f288" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.054609 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" 
containerID="760aee346ddb22427580c02a49a3a1d5ea831c51adeed5dfc8845d170af2f288" err="rpc error: code = NotFound desc = could not find container \"760aee346ddb22427580c02a49a3a1d5ea831c51adeed5dfc8845d170af2f288\": container with ID starting with 760aee346ddb22427580c02a49a3a1d5ea831c51adeed5dfc8845d170af2f288 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.054899 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\": container with ID starting with ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333 not found: ID does not exist" containerID="ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.054935 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333" err="rpc error: code = NotFound desc = could not find container \"ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333\": container with ID starting with ea11448c0ee33a569f6d69d267e792b452d2024239768810e787c3c52f080333 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.055241 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca881e1dddf4d4356899329bc9b2b3ff4ab72b9778cec4323d14c4bb43cf3e1\": container with ID starting with 4ca881e1dddf4d4356899329bc9b2b3ff4ab72b9778cec4323d14c4bb43cf3e1 not found: ID does not exist" containerID="4ca881e1dddf4d4356899329bc9b2b3ff4ab72b9778cec4323d14c4bb43cf3e1" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.055278 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="4ca881e1dddf4d4356899329bc9b2b3ff4ab72b9778cec4323d14c4bb43cf3e1" err="rpc 
error: code = NotFound desc = could not find container \"4ca881e1dddf4d4356899329bc9b2b3ff4ab72b9778cec4323d14c4bb43cf3e1\": container with ID starting with 4ca881e1dddf4d4356899329bc9b2b3ff4ab72b9778cec4323d14c4bb43cf3e1 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.055580 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\": container with ID starting with caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3 not found: ID does not exist" containerID="caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.055735 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3" err="rpc error: code = NotFound desc = could not find container \"caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3\": container with ID starting with caf1498eec5b51d72767ade594459626b076c4bb41f3b23c2fc33eb01453a9a3 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.056239 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\": container with ID starting with 05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0 not found: ID does not exist" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.056357 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0" err="rpc error: code = NotFound desc = could not find container 
\"05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0\": container with ID starting with 05c582e8404bde997b8ba5640dc26199d47b5ebbea2e230e2e412df871d70fb0 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.056746 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75\": container with ID starting with e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75 not found: ID does not exist" containerID="e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.056853 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75" err="rpc error: code = NotFound desc = could not find container \"e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75\": container with ID starting with e71cf476a5d1e6e6f82d022d2c969847e5c7f60c746c9ddb24b2031097a46d75 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.058645 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\": container with ID starting with 51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652 not found: ID does not exist" containerID="51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.058682 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652" err="rpc error: code = NotFound desc = could not find container \"51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652\": container with ID 
starting with 51f404c881ca1db3f692c569d84e776a944969cdc45dcfcd77b3075a4e060652 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.058987 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\": container with ID starting with cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9 not found: ID does not exist" containerID="cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.059047 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9" err="rpc error: code = NotFound desc = could not find container \"cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9\": container with ID starting with cf3635d1dd05337fb3772349412a467c217484455674593de7d1edb2bc2adbb9 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.059353 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\": container with ID starting with 4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e not found: ID does not exist" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.059487 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e" err="rpc error: code = NotFound desc = could not find container \"4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e\": container with ID starting with 4e97072beb2528323d65b4890ecfb0ef05ba152693b45e6024767afae0a51a3e not 
found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.059845 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\": container with ID starting with 4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9 not found: ID does not exist" containerID="4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.059880 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9" err="rpc error: code = NotFound desc = could not find container \"4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9\": container with ID starting with 4cc232018c166e3824fff4f8ae14e927b7e5a62db08fe0d5567989b2f7777db9 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.060608 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\": container with ID starting with 951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa not found: ID does not exist" containerID="951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.060720 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa" err="rpc error: code = NotFound desc = could not find container \"951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa\": container with ID starting with 951a4cb5c15d8b749e1e816613c5f4a5982617b804458c9d6eba980b7a835faa not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 
00:22:19.061161 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\": container with ID starting with 246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b not found: ID does not exist" containerID="246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.061280 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b" err="rpc error: code = NotFound desc = could not find container \"246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b\": container with ID starting with 246fe1842a778f99922dcaebdfdf3fa962ff0b42cf53b4960965b9b0952e327b not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.061698 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\": container with ID starting with 6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212 not found: ID does not exist" containerID="6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.061736 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212" err="rpc error: code = NotFound desc = could not find container \"6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212\": container with ID starting with 6f9447e8f0f71aa93b7c7f0c65de304ff89f68bd3a2fffd95eb58cbb2e4d7212 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.062112 3556 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\": container with ID starting with 2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5 not found: ID does not exist" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.062156 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5" err="rpc error: code = NotFound desc = could not find container \"2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5\": container with ID starting with 2906227d65faf2af2509e2b4ea74c41122d8a9457e0a781b50f921dacf31f6e5 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.062673 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9\": container with ID starting with a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9 not found: ID does not exist" containerID="a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.062777 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9" err="rpc error: code = NotFound desc = could not find container \"a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9\": container with ID starting with a12818978287aa2891509aac46a2dffcb4a4895e9ad613cdd64b4d713d4507b9 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: E1128 00:22:19.063183 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\": container with ID starting with c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6 not found: ID does not exist" containerID="c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.063381 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6" err="rpc error: code = NotFound desc = could not find container \"c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6\": container with ID starting with c9cafe264502238216f6bc8f6ac8722c0852ff7081ab9873e558d2d0d08e89b6 not found: ID does not exist" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.108528 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 00:22:19 crc kubenswrapper[3556]: I1128 00:22:19.331583 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 00:22:20 crc kubenswrapper[3556]: I1128 00:22:20.064414 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 00:22:22 crc kubenswrapper[3556]: I1128 00:22:22.663619 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:22:22 crc kubenswrapper[3556]: I1128 00:22:22.664038 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 00:22:31 crc kubenswrapper[3556]: I1128 00:22:31.913357 3556 scope.go:117] "RemoveContainer" containerID="b203e8ed09c9350b236814135962bdc19666470cae6146b3024fa04966e01b50"
Nov 28 00:22:32 crc kubenswrapper[3556]: I1128 00:22:32.579760 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-q88th_475321a1-8b7e-4033-8f72-b05a8b377347/kube-multus/8.log"
Nov 28 00:22:32 crc kubenswrapper[3556]: I1128 00:22:32.580424 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-q88th" event={"ID":"475321a1-8b7e-4033-8f72-b05a8b377347","Type":"ContainerStarted","Data":"071d996e5a82105ebad0ab91472e62f37c21bd388d12699ca931d03be4d22765"}
Nov 28 00:22:37 crc kubenswrapper[3556]: I1128 00:22:37.921471 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngf94"
Nov 28 00:22:52 crc kubenswrapper[3556]: I1128 00:22:52.663818 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 00:22:52 crc kubenswrapper[3556]: I1128 00:22:52.665901 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 00:22:52 crc kubenswrapper[3556]: I1128 00:22:52.666150 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg"
Nov 28 00:22:52 crc kubenswrapper[3556]: I1128 00:22:52.667120 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"756add6244838c2be85afcde4726595ecd7b69e02660adc403684ace5b7b9f01"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 00:22:52 crc kubenswrapper[3556]: I1128 00:22:52.667423 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://756add6244838c2be85afcde4726595ecd7b69e02660adc403684ace5b7b9f01" gracePeriod=600
Nov 28 00:22:53 crc kubenswrapper[3556]: I1128 00:22:53.714547 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="756add6244838c2be85afcde4726595ecd7b69e02660adc403684ace5b7b9f01" exitCode=0
Nov 28 00:22:53 crc kubenswrapper[3556]: I1128 00:22:53.714615 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"756add6244838c2be85afcde4726595ecd7b69e02660adc403684ace5b7b9f01"}
Nov 28 00:22:53 crc kubenswrapper[3556]: I1128 00:22:53.715235 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"88c4fb4cb642fcbc411ede2f7fa1488222a3e7056a17bfed36ddfaeda62f2163"}
Nov 28 00:22:53 crc kubenswrapper[3556]: I1128 00:22:53.715267 3556 scope.go:117] "RemoveContainer" containerID="acafa606c4aa1bb9f7edfa1daf5c757ca7084d520498133fa4c1d1f00743db14"
Nov 28 00:22:57 crc kubenswrapper[3556]: I1128 00:22:57.464701 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zs8v"]
Nov 28 00:22:57 crc kubenswrapper[3556]: I1128 00:22:57.466416 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8zs8v" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="registry-server" containerID="cri-o://4c6d2c29f117acbf39f09ada99524e3dac8a0de8e96b417d647f6cfa3f7424af" gracePeriod=30
Nov 28 00:22:57 crc kubenswrapper[3556]: I1128 00:22:57.502863 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zs8v"]
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.477161 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dhp4b"]
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.477560 3556 topology_manager.go:215] "Topology Admit Handler" podUID="d25169d3-2955-4003-94c3-ad5bf3298b88" podNamespace="openshift-marketplace" podName="redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: E1128 00:22:58.477724 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.477736 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.477832 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dae59545f22b3fb679a7fbf878a6379" containerName="startup-monitor"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.478530 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.481970 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhp4b"]
Nov 28 00:22:58 crc kubenswrapper[3556]: E1128 00:22:58.491952 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[catalog-content kube-api-access-5sw8n utilities], unattached volumes=[], failed to process volumes=[catalog-content kube-api-access-5sw8n utilities]: context canceled" pod="openshift-marketplace/redhat-marketplace-dhp4b" podUID="d25169d3-2955-4003-94c3-ad5bf3298b88"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.493070 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhp4b"]
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.634168 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-catalog-content\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.634239 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sw8n\" (UniqueName: \"kubernetes.io/projected/d25169d3-2955-4003-94c3-ad5bf3298b88-kube-api-access-5sw8n\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.634371 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-utilities\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.735278 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-catalog-content\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.735339 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-5sw8n\" (UniqueName: \"kubernetes.io/projected/d25169d3-2955-4003-94c3-ad5bf3298b88-kube-api-access-5sw8n\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.735369 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-utilities\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.735843 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-catalog-content\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.735900 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-utilities\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.736362 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.743814 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.756420 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sw8n\" (UniqueName: \"kubernetes.io/projected/d25169d3-2955-4003-94c3-ad5bf3298b88-kube-api-access-5sw8n\") pod \"redhat-marketplace-dhp4b\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") " pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.836181 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-catalog-content\") pod \"d25169d3-2955-4003-94c3-ad5bf3298b88\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") "
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.836305 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-utilities\") pod \"d25169d3-2955-4003-94c3-ad5bf3298b88\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") "
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.836450 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sw8n\" (UniqueName: \"kubernetes.io/projected/d25169d3-2955-4003-94c3-ad5bf3298b88-kube-api-access-5sw8n\") pod \"d25169d3-2955-4003-94c3-ad5bf3298b88\" (UID: \"d25169d3-2955-4003-94c3-ad5bf3298b88\") "
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.836587 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-utilities" (OuterVolumeSpecName: "utilities") pod "d25169d3-2955-4003-94c3-ad5bf3298b88" (UID: "d25169d3-2955-4003-94c3-ad5bf3298b88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.836828 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.836489 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d25169d3-2955-4003-94c3-ad5bf3298b88" (UID: "d25169d3-2955-4003-94c3-ad5bf3298b88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.839248 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25169d3-2955-4003-94c3-ad5bf3298b88-kube-api-access-5sw8n" (OuterVolumeSpecName: "kube-api-access-5sw8n") pod "d25169d3-2955-4003-94c3-ad5bf3298b88" (UID: "d25169d3-2955-4003-94c3-ad5bf3298b88"). InnerVolumeSpecName "kube-api-access-5sw8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.937680 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d25169d3-2955-4003-94c3-ad5bf3298b88-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:58 crc kubenswrapper[3556]: I1128 00:22:58.937721 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5sw8n\" (UniqueName: \"kubernetes.io/projected/d25169d3-2955-4003-94c3-ad5bf3298b88-kube-api-access-5sw8n\") on node \"crc\" DevicePath \"\""
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.740735 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhp4b"
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.779003 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhp4b"]
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.789730 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhp4b"]
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.891241 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvn54"]
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.891349 3556 topology_manager.go:215] "Topology Admit Handler" podUID="5658ffff-996e-4f3a-a29c-2e04cb3a60ba" podNamespace="openshift-marketplace" podName="redhat-marketplace-qvn54"
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.892404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.912833 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvn54"]
Nov 28 00:22:59 crc kubenswrapper[3556]: E1128 00:22:59.913362 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[catalog-content kube-api-access-zgh27 utilities], unattached volumes=[], failed to process volumes=[catalog-content kube-api-access-zgh27 utilities]: context canceled" pod="openshift-marketplace/redhat-marketplace-qvn54" podUID="5658ffff-996e-4f3a-a29c-2e04cb3a60ba"
Nov 28 00:22:59 crc kubenswrapper[3556]: I1128 00:22:59.941312 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvn54"]
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.052157 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-catalog-content\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.052263 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-utilities\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.053236 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgh27\" (UniqueName: \"kubernetes.io/projected/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-kube-api-access-zgh27\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.154535 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-catalog-content\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.154601 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-utilities\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.154652 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zgh27\" (UniqueName: \"kubernetes.io/projected/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-kube-api-access-zgh27\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.155057 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-utilities\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.155279 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-catalog-content\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.173116 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgh27\" (UniqueName: \"kubernetes.io/projected/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-kube-api-access-zgh27\") pod \"redhat-marketplace-qvn54\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") " pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.745364 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.753106 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.861685 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-utilities\") pod \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") "
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.861767 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-catalog-content\") pod \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") "
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.861823 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgh27\" (UniqueName: \"kubernetes.io/projected/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-kube-api-access-zgh27\") pod \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\" (UID: \"5658ffff-996e-4f3a-a29c-2e04cb3a60ba\") "
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.862939 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-utilities" (OuterVolumeSpecName: "utilities") pod "5658ffff-996e-4f3a-a29c-2e04cb3a60ba" (UID: "5658ffff-996e-4f3a-a29c-2e04cb3a60ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.863088 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5658ffff-996e-4f3a-a29c-2e04cb3a60ba" (UID: "5658ffff-996e-4f3a-a29c-2e04cb3a60ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.872229 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-kube-api-access-zgh27" (OuterVolumeSpecName: "kube-api-access-zgh27") pod "5658ffff-996e-4f3a-a29c-2e04cb3a60ba" (UID: "5658ffff-996e-4f3a-a29c-2e04cb3a60ba"). InnerVolumeSpecName "kube-api-access-zgh27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.919143 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d25169d3-2955-4003-94c3-ad5bf3298b88" path="/var/lib/kubelet/pods/d25169d3-2955-4003-94c3-ad5bf3298b88/volumes"
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.963591 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zgh27\" (UniqueName: \"kubernetes.io/projected/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-kube-api-access-zgh27\") on node \"crc\" DevicePath \"\""
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.963627 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:23:00 crc kubenswrapper[3556]: I1128 00:23:00.963637 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5658ffff-996e-4f3a-a29c-2e04cb3a60ba-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:23:01 crc kubenswrapper[3556]: I1128 00:23:01.749076 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvn54"
Nov 28 00:23:01 crc kubenswrapper[3556]: I1128 00:23:01.787872 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvn54"]
Nov 28 00:23:01 crc kubenswrapper[3556]: I1128 00:23:01.792409 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvn54"]
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.755385 3556 generic.go:334] "Generic (PLEG): container finished" podID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerID="4c6d2c29f117acbf39f09ada99524e3dac8a0de8e96b417d647f6cfa3f7424af" exitCode=0
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.755444 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zs8v" event={"ID":"7a4a4778-a2d1-49b1-942b-0cf262013ba4","Type":"ContainerDied","Data":"4c6d2c29f117acbf39f09ada99524e3dac8a0de8e96b417d647f6cfa3f7424af"}
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.755754 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8zs8v" event={"ID":"7a4a4778-a2d1-49b1-942b-0cf262013ba4","Type":"ContainerDied","Data":"36e4d63bd838f088645fadd0dbef55525b2fdbe0d6229cd4b5fc449c7088e7c6"}
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.755772 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e4d63bd838f088645fadd0dbef55525b2fdbe0d6229cd4b5fc449c7088e7c6"
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.795256 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zs8v"
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.886214 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p69r5\" (UniqueName: \"kubernetes.io/projected/7a4a4778-a2d1-49b1-942b-0cf262013ba4-kube-api-access-p69r5\") pod \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") "
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.886571 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-catalog-content\") pod \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") "
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.886720 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-utilities\") pod \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\" (UID: \"7a4a4778-a2d1-49b1-942b-0cf262013ba4\") "
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.887499 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-utilities" (OuterVolumeSpecName: "utilities") pod "7a4a4778-a2d1-49b1-942b-0cf262013ba4" (UID: "7a4a4778-a2d1-49b1-942b-0cf262013ba4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.892239 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4a4778-a2d1-49b1-942b-0cf262013ba4-kube-api-access-p69r5" (OuterVolumeSpecName: "kube-api-access-p69r5") pod "7a4a4778-a2d1-49b1-942b-0cf262013ba4" (UID: "7a4a4778-a2d1-49b1-942b-0cf262013ba4"). InnerVolumeSpecName "kube-api-access-p69r5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.918316 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5658ffff-996e-4f3a-a29c-2e04cb3a60ba" path="/var/lib/kubelet/pods/5658ffff-996e-4f3a-a29c-2e04cb3a60ba/volumes"
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.988054 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p69r5\" (UniqueName: \"kubernetes.io/projected/7a4a4778-a2d1-49b1-942b-0cf262013ba4-kube-api-access-p69r5\") on node \"crc\" DevicePath \"\""
Nov 28 00:23:02 crc kubenswrapper[3556]: I1128 00:23:02.988091 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:23:03 crc kubenswrapper[3556]: I1128 00:23:03.028641 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a4a4778-a2d1-49b1-942b-0cf262013ba4" (UID: "7a4a4778-a2d1-49b1-942b-0cf262013ba4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:23:03 crc kubenswrapper[3556]: I1128 00:23:03.089120 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4a4778-a2d1-49b1-942b-0cf262013ba4-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:23:03 crc kubenswrapper[3556]: I1128 00:23:03.762652 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8zs8v"
Nov 28 00:23:03 crc kubenswrapper[3556]: I1128 00:23:03.829754 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zs8v"]
Nov 28 00:23:03 crc kubenswrapper[3556]: I1128 00:23:03.832930 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8zs8v"]
Nov 28 00:23:04 crc kubenswrapper[3556]: I1128 00:23:04.919126 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" path="/var/lib/kubelet/pods/7a4a4778-a2d1-49b1-942b-0cf262013ba4/volumes"
Nov 28 00:23:18 crc kubenswrapper[3556]: I1128 00:23:18.697244 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:23:18 crc kubenswrapper[3556]: I1128 00:23:18.697879 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:23:18 crc kubenswrapper[3556]: I1128 00:23:18.697924 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:23:18 crc kubenswrapper[3556]: I1128 00:23:18.697943 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:23:18 crc kubenswrapper[3556]: I1128 00:23:18.697978 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:23:19 crc kubenswrapper[3556]: I1128 00:23:19.065956 3556 scope.go:117] "RemoveContainer" containerID="a4446cf6d632203fec2dc2f4a665c4d5f6f7845e3b939f28951ebb1abcf24d76"
Nov 28 00:23:19 crc kubenswrapper[3556]: I1128 00:23:19.105064 3556 scope.go:117] "RemoveContainer" containerID="9bc0dcb69f1cd164a47ddc2af0bcd338fc02f94aa714e3112778d49bc05bc011"
Nov 28 00:23:19 crc kubenswrapper[3556]: I1128 00:23:19.131887 3556 scope.go:117] "RemoveContainer" containerID="4c6d2c29f117acbf39f09ada99524e3dac8a0de8e96b417d647f6cfa3f7424af"
Nov 28 00:24:18 crc kubenswrapper[3556]: I1128 00:24:18.698398 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:24:18 crc kubenswrapper[3556]: I1128 00:24:18.698874 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:24:18 crc kubenswrapper[3556]: I1128 00:24:18.698891 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:24:18 crc kubenswrapper[3556]: I1128 00:24:18.698919 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:24:18 crc kubenswrapper[3556]: I1128 00:24:18.698944 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:24:19 crc kubenswrapper[3556]: I1128 00:24:19.180070 3556 scope.go:117] "RemoveContainer" containerID="e0339d9059bd6726a67d5583d49cbdf770c54bafe9a81b1ee2f46e295f6d4810"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.615876 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"]
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.618249 3556 topology_manager.go:215] "Topology Admit Handler" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" podNamespace="openshift-marketplace" podName="6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:24 crc kubenswrapper[3556]: E1128 00:24:24.618523 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="registry-server"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.618600 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="registry-server"
Nov 28 00:24:24 crc kubenswrapper[3556]: E1128 00:24:24.618671 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="extract-content"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.618735 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="extract-content"
Nov 28 00:24:24 crc kubenswrapper[3556]: E1128 00:24:24.618812 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="extract-utilities"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.618896 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="extract-utilities"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.619101 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4a4778-a2d1-49b1-942b-0cf262013ba4" containerName="registry-server"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.620232 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.628890 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-4w6pc"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.644171 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"]
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.648906 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"]
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.649029 3556 topology_manager.go:215] "Topology Admit Handler" podUID="f3824391-427a-4382-9971-0a119acc3392" podNamespace="openshift-marketplace" podName="6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.649924 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.679405 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"]
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.713156 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"]
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.713532 3556 topology_manager.go:215] "Topology Admit Handler" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" podNamespace="openshift-marketplace" podName="8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.714772 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.721091 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"]
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.787348 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.787449 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqd6k\" (UniqueName: \"kubernetes.io/projected/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-kube-api-access-rqd6k\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.787502 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.787532 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lp99\" (UniqueName: \"kubernetes.io/projected/f3824391-427a-4382-9971-0a119acc3392-kube-api-access-9lp99\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.787565 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.787593 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.888267 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.888371 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.888408 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9lp99\" (UniqueName: \"kubernetes.io/projected/f3824391-427a-4382-9971-0a119acc3392-kube-api-access-9lp99\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.888439 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-util\") pod 
\"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.888891 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.888969 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.889040 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj5vq\" (UniqueName: \"kubernetes.io/projected/996c7ba9-f850-43cf-8cc9-37ed57473f15-kube-api-access-dj5vq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.889086 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.889403 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.889761 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.889799 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.889879 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rqd6k\" (UniqueName: \"kubernetes.io/projected/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-kube-api-access-rqd6k\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.890217 3556 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.905552 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqd6k\" (UniqueName: \"kubernetes.io/projected/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-kube-api-access-rqd6k\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.906158 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lp99\" (UniqueName: \"kubernetes.io/projected/f3824391-427a-4382-9971-0a119acc3392-kube-api-access-9lp99\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.935699 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.991866 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dj5vq\" (UniqueName: \"kubernetes.io/projected/996c7ba9-f850-43cf-8cc9-37ed57473f15-kube-api-access-dj5vq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.991996 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.992051 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:24 crc kubenswrapper[3556]: I1128 00:24:24.992547 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:24 crc kubenswrapper[3556]: 
I1128 00:24:24.992574 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.004860 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.010165 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj5vq\" (UniqueName: \"kubernetes.io/projected/996c7ba9-f850-43cf-8cc9-37ed57473f15-kube-api-access-dj5vq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.045523 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.171573 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j"] Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.171683 3556 topology_manager.go:215] "Topology Admit Handler" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" podNamespace="openshift-marketplace" podName="695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.172603 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.191584 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j"] Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.215449 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"] Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.256484 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" event={"ID":"f3824391-427a-4382-9971-0a119acc3392","Type":"ContainerStarted","Data":"9713bd2b0ca286fae1702c7ede0116a2811261b393c04ad001da83f9b55423e4"} Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.270230 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"] Nov 28 00:24:25 crc kubenswrapper[3556]: W1128 00:24:25.273248 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996c7ba9_f850_43cf_8cc9_37ed57473f15.slice/crio-13c8ec25ffc9d42a776191e7aa7ec32b87b44692c7f5b6d6226952c93bbc86ce WatchSource:0}: Error finding container 13c8ec25ffc9d42a776191e7aa7ec32b87b44692c7f5b6d6226952c93bbc86ce: Status 404 returned error can't find the container with id 13c8ec25ffc9d42a776191e7aa7ec32b87b44692c7f5b6d6226952c93bbc86ce Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.297707 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pnf\" (UniqueName: \"kubernetes.io/projected/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-kube-api-access-q4pnf\") pod 
\"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.297769 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.297940 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.332991 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"] Nov 28 00:24:25 crc kubenswrapper[3556]: W1128 00:24:25.343077 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bc470cf_2bf2_4551_8f7b_85c8d6e3005c.slice/crio-ee634b98d29a3694e0b460dd834f5678fa3c8029972e0d47add8c0d4131266f7 WatchSource:0}: Error finding container ee634b98d29a3694e0b460dd834f5678fa3c8029972e0d47add8c0d4131266f7: Status 404 returned error can't find the container with id ee634b98d29a3694e0b460dd834f5678fa3c8029972e0d47add8c0d4131266f7 Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.398727 3556 reconciler_common.go:231] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q4pnf\" (UniqueName: \"kubernetes.io/projected/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-kube-api-access-q4pnf\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.398786 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-util\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.398845 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.399348 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-bundle\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.399597 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-util\") pod 
\"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.418951 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pnf\" (UniqueName: \"kubernetes.io/projected/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-kube-api-access-q4pnf\") pod \"695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") " pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.488046 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" Nov 28 00:24:25 crc kubenswrapper[3556]: I1128 00:24:25.661711 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j"] Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.263148 3556 generic.go:334] "Generic (PLEG): container finished" podID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerID="af015974815db7e077cda72e95a1d11907df07c9b9f5eebe0c944d051e0829de" exitCode=0 Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.263254 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" event={"ID":"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e","Type":"ContainerDied","Data":"af015974815db7e077cda72e95a1d11907df07c9b9f5eebe0c944d051e0829de"} Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.263292 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" 
event={"ID":"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e","Type":"ContainerStarted","Data":"20b7cf192e868c2d909085452733aa5ca8586b97c1010c456c7d5a73e5056884"} Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.265245 3556 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.265467 3556 generic.go:334] "Generic (PLEG): container finished" podID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerID="66cb433ff1ec7b516729426a8e731d932e5428721cea3061b04c81e752d5bb5b" exitCode=0 Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.265522 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" event={"ID":"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c","Type":"ContainerDied","Data":"66cb433ff1ec7b516729426a8e731d932e5428721cea3061b04c81e752d5bb5b"} Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.265541 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" event={"ID":"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c","Type":"ContainerStarted","Data":"ee634b98d29a3694e0b460dd834f5678fa3c8029972e0d47add8c0d4131266f7"} Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.268031 3556 generic.go:334] "Generic (PLEG): container finished" podID="f3824391-427a-4382-9971-0a119acc3392" containerID="d59dccac3dee63426aef44eb822e5b6ab0471d6946bc4c1faa544ff5d76a0508" exitCode=0 Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.268104 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" event={"ID":"f3824391-427a-4382-9971-0a119acc3392","Type":"ContainerDied","Data":"d59dccac3dee63426aef44eb822e5b6ab0471d6946bc4c1faa544ff5d76a0508"} Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.269441 3556 generic.go:334] 
"Generic (PLEG): container finished" podID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerID="a72a7761cadee5726ddf4353981d485d343bf6b192ec97e675c7fb9ec56ce46c" exitCode=0 Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.269481 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" event={"ID":"996c7ba9-f850-43cf-8cc9-37ed57473f15","Type":"ContainerDied","Data":"a72a7761cadee5726ddf4353981d485d343bf6b192ec97e675c7fb9ec56ce46c"} Nov 28 00:24:26 crc kubenswrapper[3556]: I1128 00:24:26.269507 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" event={"ID":"996c7ba9-f850-43cf-8cc9-37ed57473f15","Type":"ContainerStarted","Data":"13c8ec25ffc9d42a776191e7aa7ec32b87b44692c7f5b6d6226952c93bbc86ce"} Nov 28 00:24:29 crc kubenswrapper[3556]: I1128 00:24:29.290481 3556 generic.go:334] "Generic (PLEG): container finished" podID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerID="ddaf78eab99b7fa24c6294c7de87a2bbba1840d600eb1e5db4eec9ad56b4cf8f" exitCode=0 Nov 28 00:24:29 crc kubenswrapper[3556]: I1128 00:24:29.290558 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" event={"ID":"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c","Type":"ContainerDied","Data":"ddaf78eab99b7fa24c6294c7de87a2bbba1840d600eb1e5db4eec9ad56b4cf8f"} Nov 28 00:24:29 crc kubenswrapper[3556]: I1128 00:24:29.298532 3556 generic.go:334] "Generic (PLEG): container finished" podID="f3824391-427a-4382-9971-0a119acc3392" containerID="3bbab00fb7320586ebd260a67cd7d55f815278d2966d50a49a140425ae79cba0" exitCode=0 Nov 28 00:24:29 crc kubenswrapper[3556]: I1128 00:24:29.298576 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" 
event={"ID":"f3824391-427a-4382-9971-0a119acc3392","Type":"ContainerDied","Data":"3bbab00fb7320586ebd260a67cd7d55f815278d2966d50a49a140425ae79cba0"} Nov 28 00:24:29 crc kubenswrapper[3556]: I1128 00:24:29.303184 3556 generic.go:334] "Generic (PLEG): container finished" podID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerID="ace9bb169b1394faf869bd2bcb6ad5bdfe41622cc7a6f1f96997da11a879f640" exitCode=0 Nov 28 00:24:29 crc kubenswrapper[3556]: I1128 00:24:29.303232 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" event={"ID":"996c7ba9-f850-43cf-8cc9-37ed57473f15","Type":"ContainerDied","Data":"ace9bb169b1394faf869bd2bcb6ad5bdfe41622cc7a6f1f96997da11a879f640"} Nov 28 00:24:30 crc kubenswrapper[3556]: I1128 00:24:30.309761 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" event={"ID":"f3824391-427a-4382-9971-0a119acc3392","Type":"ContainerStarted","Data":"c9869a3b5f7bb9e39f3ee2eaef315b15248b204ba156bf5444517536ea5d2315"} Nov 28 00:24:30 crc kubenswrapper[3556]: I1128 00:24:30.314732 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" event={"ID":"996c7ba9-f850-43cf-8cc9-37ed57473f15","Type":"ContainerStarted","Data":"9adf3e871307546b086cf18349604f245fd58b59b1e1381189aef0a098e066ff"} Nov 28 00:24:30 crc kubenswrapper[3556]: I1128 00:24:30.318777 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" event={"ID":"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c","Type":"ContainerStarted","Data":"7c433435ec6f156d2a8f706fa7b79fda80b6e517ef3ca4a3997a9bcc495918a9"} Nov 28 00:24:30 crc kubenswrapper[3556]: I1128 00:24:30.359519 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" podStartSLOduration=4.575292558 podStartE2EDuration="6.359478237s" podCreationTimestamp="2025-11-28 00:24:24 +0000 UTC" firstStartedPulling="2025-11-28 00:24:26.266687638 +0000 UTC m=+727.858919628" lastFinishedPulling="2025-11-28 00:24:28.050873327 +0000 UTC m=+729.643105307" observedRunningTime="2025-11-28 00:24:30.355912517 +0000 UTC m=+731.948144507" watchObservedRunningTime="2025-11-28 00:24:30.359478237 +0000 UTC m=+731.951710227" Nov 28 00:24:30 crc kubenswrapper[3556]: I1128 00:24:30.360822 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" podStartSLOduration=4.584524437 podStartE2EDuration="6.360805427s" podCreationTimestamp="2025-11-28 00:24:24 +0000 UTC" firstStartedPulling="2025-11-28 00:24:26.269300527 +0000 UTC m=+727.861532527" lastFinishedPulling="2025-11-28 00:24:28.045581527 +0000 UTC m=+729.637813517" observedRunningTime="2025-11-28 00:24:30.333830455 +0000 UTC m=+731.926062455" watchObservedRunningTime="2025-11-28 00:24:30.360805427 +0000 UTC m=+731.953037417" Nov 28 00:24:30 crc kubenswrapper[3556]: I1128 00:24:30.380525 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" podStartSLOduration=4.598530216 podStartE2EDuration="6.380480244s" podCreationTimestamp="2025-11-28 00:24:24 +0000 UTC" firstStartedPulling="2025-11-28 00:24:26.270623177 +0000 UTC m=+727.862855167" lastFinishedPulling="2025-11-28 00:24:28.052573195 +0000 UTC m=+729.644805195" observedRunningTime="2025-11-28 00:24:30.378227254 +0000 UTC m=+731.970459244" watchObservedRunningTime="2025-11-28 00:24:30.380480244 +0000 UTC m=+731.972712244" Nov 28 00:24:31 crc kubenswrapper[3556]: I1128 00:24:31.324519 3556 generic.go:334] "Generic (PLEG): container finished" 
podID="f3824391-427a-4382-9971-0a119acc3392" containerID="c9869a3b5f7bb9e39f3ee2eaef315b15248b204ba156bf5444517536ea5d2315" exitCode=0
Nov 28 00:24:31 crc kubenswrapper[3556]: I1128 00:24:31.324639 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" event={"ID":"f3824391-427a-4382-9971-0a119acc3392","Type":"ContainerDied","Data":"c9869a3b5f7bb9e39f3ee2eaef315b15248b204ba156bf5444517536ea5d2315"}
Nov 28 00:24:31 crc kubenswrapper[3556]: I1128 00:24:31.328051 3556 generic.go:334] "Generic (PLEG): container finished" podID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerID="9adf3e871307546b086cf18349604f245fd58b59b1e1381189aef0a098e066ff" exitCode=0
Nov 28 00:24:31 crc kubenswrapper[3556]: I1128 00:24:31.328164 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" event={"ID":"996c7ba9-f850-43cf-8cc9-37ed57473f15","Type":"ContainerDied","Data":"9adf3e871307546b086cf18349604f245fd58b59b1e1381189aef0a098e066ff"}
Nov 28 00:24:31 crc kubenswrapper[3556]: I1128 00:24:31.330126 3556 generic.go:334] "Generic (PLEG): container finished" podID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerID="fdede00c32f5b3d259a5caee56643fa84d35e93aac768295c9a132e62a9487ff" exitCode=0
Nov 28 00:24:31 crc kubenswrapper[3556]: I1128 00:24:31.330212 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" event={"ID":"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e","Type":"ContainerDied","Data":"fdede00c32f5b3d259a5caee56643fa84d35e93aac768295c9a132e62a9487ff"}
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.335426 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" event={"ID":"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e","Type":"ContainerStarted","Data":"b4d7b0e6be08b4b22248ab4b2a49e530eee97bd438a28aa13b927a0a1a5311ee"}
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.336944 3556 generic.go:334] "Generic (PLEG): container finished" podID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerID="7c433435ec6f156d2a8f706fa7b79fda80b6e517ef3ca4a3997a9bcc495918a9" exitCode=0
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.337041 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" event={"ID":"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c","Type":"ContainerDied","Data":"7c433435ec6f156d2a8f706fa7b79fda80b6e517ef3ca4a3997a9bcc495918a9"}
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.353150 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" podStartSLOduration=3.607216274 podStartE2EDuration="7.353103023s" podCreationTimestamp="2025-11-28 00:24:25 +0000 UTC" firstStartedPulling="2025-11-28 00:24:26.264923658 +0000 UTC m=+727.857155658" lastFinishedPulling="2025-11-28 00:24:30.010810407 +0000 UTC m=+731.603042407" observedRunningTime="2025-11-28 00:24:32.348906288 +0000 UTC m=+733.941138288" watchObservedRunningTime="2025-11-28 00:24:32.353103023 +0000 UTC m=+733.945335023"
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.637974 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.643823 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.797679 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-util\") pod \"f3824391-427a-4382-9971-0a119acc3392\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") "
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.797743 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-bundle\") pod \"f3824391-427a-4382-9971-0a119acc3392\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") "
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.797813 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj5vq\" (UniqueName: \"kubernetes.io/projected/996c7ba9-f850-43cf-8cc9-37ed57473f15-kube-api-access-dj5vq\") pod \"996c7ba9-f850-43cf-8cc9-37ed57473f15\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") "
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.797874 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lp99\" (UniqueName: \"kubernetes.io/projected/f3824391-427a-4382-9971-0a119acc3392-kube-api-access-9lp99\") pod \"f3824391-427a-4382-9971-0a119acc3392\" (UID: \"f3824391-427a-4382-9971-0a119acc3392\") "
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.797922 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-util\") pod \"996c7ba9-f850-43cf-8cc9-37ed57473f15\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") "
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.797967 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-bundle\") pod \"996c7ba9-f850-43cf-8cc9-37ed57473f15\" (UID: \"996c7ba9-f850-43cf-8cc9-37ed57473f15\") "
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.798806 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-bundle" (OuterVolumeSpecName: "bundle") pod "f3824391-427a-4382-9971-0a119acc3392" (UID: "f3824391-427a-4382-9971-0a119acc3392"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.799042 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-bundle" (OuterVolumeSpecName: "bundle") pod "996c7ba9-f850-43cf-8cc9-37ed57473f15" (UID: "996c7ba9-f850-43cf-8cc9-37ed57473f15"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.804251 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996c7ba9-f850-43cf-8cc9-37ed57473f15-kube-api-access-dj5vq" (OuterVolumeSpecName: "kube-api-access-dj5vq") pod "996c7ba9-f850-43cf-8cc9-37ed57473f15" (UID: "996c7ba9-f850-43cf-8cc9-37ed57473f15"). InnerVolumeSpecName "kube-api-access-dj5vq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.804300 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3824391-427a-4382-9971-0a119acc3392-kube-api-access-9lp99" (OuterVolumeSpecName: "kube-api-access-9lp99") pod "f3824391-427a-4382-9971-0a119acc3392" (UID: "f3824391-427a-4382-9971-0a119acc3392"). InnerVolumeSpecName "kube-api-access-9lp99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.809194 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-util" (OuterVolumeSpecName: "util") pod "996c7ba9-f850-43cf-8cc9-37ed57473f15" (UID: "996c7ba9-f850-43cf-8cc9-37ed57473f15"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.819066 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-util" (OuterVolumeSpecName: "util") pod "f3824391-427a-4382-9971-0a119acc3392" (UID: "f3824391-427a-4382-9971-0a119acc3392"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.899177 3556 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.899220 3556 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-util\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.899234 3556 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3824391-427a-4382-9971-0a119acc3392-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.899249 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dj5vq\" (UniqueName: \"kubernetes.io/projected/996c7ba9-f850-43cf-8cc9-37ed57473f15-kube-api-access-dj5vq\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.899262 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9lp99\" (UniqueName: \"kubernetes.io/projected/f3824391-427a-4382-9971-0a119acc3392-kube-api-access-9lp99\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:32 crc kubenswrapper[3556]: I1128 00:24:32.899276 3556 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/996c7ba9-f850-43cf-8cc9-37ed57473f15-util\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.345914 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5" event={"ID":"996c7ba9-f850-43cf-8cc9-37ed57473f15","Type":"ContainerDied","Data":"13c8ec25ffc9d42a776191e7aa7ec32b87b44692c7f5b6d6226952c93bbc86ce"}
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.346881 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c8ec25ffc9d42a776191e7aa7ec32b87b44692c7f5b6d6226952c93bbc86ce"
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.345936 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5"
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.348361 3556 generic.go:334] "Generic (PLEG): container finished" podID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerID="b4d7b0e6be08b4b22248ab4b2a49e530eee97bd438a28aa13b927a0a1a5311ee" exitCode=0
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.348452 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" event={"ID":"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e","Type":"ContainerDied","Data":"b4d7b0e6be08b4b22248ab4b2a49e530eee97bd438a28aa13b927a0a1a5311ee"}
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.352289 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444"
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.352395 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444" event={"ID":"f3824391-427a-4382-9971-0a119acc3392","Type":"ContainerDied","Data":"9713bd2b0ca286fae1702c7ede0116a2811261b393c04ad001da83f9b55423e4"}
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.352564 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9713bd2b0ca286fae1702c7ede0116a2811261b393c04ad001da83f9b55423e4"
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.641530 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.712969 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-bundle\") pod \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") "
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.713074 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqd6k\" (UniqueName: \"kubernetes.io/projected/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-kube-api-access-rqd6k\") pod \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") "
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.713198 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-util\") pod \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\" (UID: \"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c\") "
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.714991 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-bundle" (OuterVolumeSpecName: "bundle") pod "3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" (UID: "3bc470cf-2bf2-4551-8f7b-85c8d6e3005c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.716438 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-kube-api-access-rqd6k" (OuterVolumeSpecName: "kube-api-access-rqd6k") pod "3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" (UID: "3bc470cf-2bf2-4551-8f7b-85c8d6e3005c"). InnerVolumeSpecName "kube-api-access-rqd6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.724301 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-util" (OuterVolumeSpecName: "util") pod "3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" (UID: "3bc470cf-2bf2-4551-8f7b-85c8d6e3005c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.814548 3556 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.814817 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rqd6k\" (UniqueName: \"kubernetes.io/projected/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-kube-api-access-rqd6k\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:33 crc kubenswrapper[3556]: I1128 00:24:33.814827 3556 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3bc470cf-2bf2-4551-8f7b-85c8d6e3005c-util\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.358692 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd" event={"ID":"3bc470cf-2bf2-4551-8f7b-85c8d6e3005c","Type":"ContainerDied","Data":"ee634b98d29a3694e0b460dd834f5678fa3c8029972e0d47add8c0d4131266f7"}
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.358740 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee634b98d29a3694e0b460dd834f5678fa3c8029972e0d47add8c0d4131266f7"
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.358742 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd"
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.528773 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j"
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.622342 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-bundle\") pod \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") "
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.622405 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-util\") pod \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") "
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.622446 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4pnf\" (UniqueName: \"kubernetes.io/projected/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-kube-api-access-q4pnf\") pod \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\" (UID: \"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e\") "
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.623682 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-bundle" (OuterVolumeSpecName: "bundle") pod "c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" (UID: "c9c2afcd-78bb-4f35-b692-6bb9c4cca46e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.631201 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-kube-api-access-q4pnf" (OuterVolumeSpecName: "kube-api-access-q4pnf") pod "c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" (UID: "c9c2afcd-78bb-4f35-b692-6bb9c4cca46e"). InnerVolumeSpecName "kube-api-access-q4pnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.634027 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-util" (OuterVolumeSpecName: "util") pod "c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" (UID: "c9c2afcd-78bb-4f35-b692-6bb9c4cca46e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.723162 3556 reconciler_common.go:300] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.723193 3556 reconciler_common.go:300] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-util\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:34 crc kubenswrapper[3556]: I1128 00:24:34.723204 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q4pnf\" (UniqueName: \"kubernetes.io/projected/c9c2afcd-78bb-4f35-b692-6bb9c4cca46e-kube-api-access-q4pnf\") on node \"crc\" DevicePath \"\""
Nov 28 00:24:35 crc kubenswrapper[3556]: I1128 00:24:35.366192 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j" event={"ID":"c9c2afcd-78bb-4f35-b692-6bb9c4cca46e","Type":"ContainerDied","Data":"20b7cf192e868c2d909085452733aa5ca8586b97c1010c456c7d5a73e5056884"}
Nov 28 00:24:35 crc kubenswrapper[3556]: I1128 00:24:35.366521 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20b7cf192e868c2d909085452733aa5ca8586b97c1010c456c7d5a73e5056884"
Nov 28 00:24:35 crc kubenswrapper[3556]: I1128 00:24:35.366243 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.397662 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-fcb984879-8lmz9"]
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398088 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff" podNamespace="service-telemetry" podName="elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398295 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f3824391-427a-4382-9971-0a119acc3392" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398309 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3824391-427a-4382-9971-0a119acc3392" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398325 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398334 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398347 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398355 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398365 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398373 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398389 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f3824391-427a-4382-9971-0a119acc3392" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398398 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3824391-427a-4382-9971-0a119acc3392" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398411 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398419 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398432 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398441 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398456 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398464 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398478 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398486 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398500 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398508 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398522 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398531 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerName="util"
Nov 28 00:24:38 crc kubenswrapper[3556]: E1128 00:24:38.398543 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="f3824391-427a-4382-9971-0a119acc3392" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398552 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3824391-427a-4382-9971-0a119acc3392" containerName="pull"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398677 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="996c7ba9-f850-43cf-8cc9-37ed57473f15" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398705 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3824391-427a-4382-9971-0a119acc3392" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398715 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc470cf-2bf2-4551-8f7b-85c8d6e3005c" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.398729 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c2afcd-78bb-4f35-b692-6bb9c4cca46e" containerName="extract"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.399163 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.402197 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.402618 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-ctx5w"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.402814 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.402877 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.456631 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-fcb984879-8lmz9"]
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.561749 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-webhook-cert\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.561820 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xz4j\" (UniqueName: \"kubernetes.io/projected/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-kube-api-access-2xz4j\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.561934 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-apiservice-cert\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.670225 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-webhook-cert\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.670288 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2xz4j\" (UniqueName: \"kubernetes.io/projected/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-kube-api-access-2xz4j\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.670327 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-apiservice-cert\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.685926 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-webhook-cert\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.685987 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-apiservice-cert\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.695665 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xz4j\" (UniqueName: \"kubernetes.io/projected/9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff-kube-api-access-2xz4j\") pod \"elastic-operator-fcb984879-8lmz9\" (UID: \"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff\") " pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:38 crc kubenswrapper[3556]: I1128 00:24:38.722535 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-fcb984879-8lmz9"
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.165661 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-fcb984879-8lmz9"]
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.382453 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-fcb984879-8lmz9" event={"ID":"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff","Type":"ContainerStarted","Data":"a5fc70254e8d56648e38452f4eb8b57156f9c29bd487e9b10df12bb832b9ebfe"}
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.774919 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-lgqg9"]
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.775070 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ce9e867e-df83-4c18-9f61-47fd60b9240d" podNamespace="service-telemetry" podName="interconnect-operator-7b75f466d4-lgqg9"
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.775786 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9"
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.777372 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-m92jj"
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.785743 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-lgqg9"]
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.887857 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ssm\" (UniqueName: \"kubernetes.io/projected/ce9e867e-df83-4c18-9f61-47fd60b9240d-kube-api-access-56ssm\") pod \"interconnect-operator-7b75f466d4-lgqg9\" (UID: \"ce9e867e-df83-4c18-9f61-47fd60b9240d\") " pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9"
Nov 28 00:24:39 crc kubenswrapper[3556]: I1128 00:24:39.989658 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-56ssm\" (UniqueName: \"kubernetes.io/projected/ce9e867e-df83-4c18-9f61-47fd60b9240d-kube-api-access-56ssm\") pod \"interconnect-operator-7b75f466d4-lgqg9\" (UID: \"ce9e867e-df83-4c18-9f61-47fd60b9240d\") " pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9"
Nov 28 00:24:40 crc kubenswrapper[3556]: I1128 00:24:40.010405 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ssm\" (UniqueName: \"kubernetes.io/projected/ce9e867e-df83-4c18-9f61-47fd60b9240d-kube-api-access-56ssm\") pod \"interconnect-operator-7b75f466d4-lgqg9\" (UID: \"ce9e867e-df83-4c18-9f61-47fd60b9240d\") " pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9"
Nov 28 00:24:40 crc kubenswrapper[3556]: I1128 00:24:40.090916 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9"
Nov 28 00:24:40 crc kubenswrapper[3556]: I1128 00:24:40.281800 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-7b75f466d4-lgqg9"]
Nov 28 00:24:40 crc kubenswrapper[3556]: W1128 00:24:40.290146 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9e867e_df83_4c18_9f61_47fd60b9240d.slice/crio-5b6c455133d09f6dd9eaf9336d8b94eb7f26175e7cb91f53c09040ab488739ba WatchSource:0}: Error finding container 5b6c455133d09f6dd9eaf9336d8b94eb7f26175e7cb91f53c09040ab488739ba: Status 404 returned error can't find the container with id 5b6c455133d09f6dd9eaf9336d8b94eb7f26175e7cb91f53c09040ab488739ba
Nov 28 00:24:40 crc kubenswrapper[3556]: I1128 00:24:40.386574 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9" event={"ID":"ce9e867e-df83-4c18-9f61-47fd60b9240d","Type":"ContainerStarted","Data":"5b6c455133d09f6dd9eaf9336d8b94eb7f26175e7cb91f53c09040ab488739ba"}
Nov 28 00:24:44 crc kubenswrapper[3556]: I1128 00:24:44.422375 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-fcb984879-8lmz9" event={"ID":"9a4c2482-c7e6-49d1-b8f6-e121a12ba9ff","Type":"ContainerStarted","Data":"e8ea36f52caf7eb8bb71ed5182059dcd1930b84a4fa8ddd0d794dee0f2a1e2bb"}
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.130790 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/elastic-operator-fcb984879-8lmz9" podStartSLOduration=2.98939966 podStartE2EDuration="7.130740322s" podCreationTimestamp="2025-11-28 00:24:38 +0000 UTC" firstStartedPulling="2025-11-28 00:24:39.224594832 +0000 UTC m=+740.816826822" lastFinishedPulling="2025-11-28 00:24:43.365935484 +0000 UTC m=+744.958167484" observedRunningTime="2025-11-28 00:24:44.456340513 +0000 UTC m=+746.048572503" watchObservedRunningTime="2025-11-28 00:24:45.130740322 +0000 UTC m=+746.722972322"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.132338 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.132420 3556 topology_manager.go:215] "Topology Admit Handler" podUID="df285d49-46a0-4b41-8d8b-7493edd5e268" podNamespace="service-telemetry" podName="elasticsearch-es-default-0"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.133275 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.143630 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.144226 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.144395 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.144547 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.144592 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.144768 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-q4vh5"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.145231 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.151460 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.233259 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.247606 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.261475 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.261546 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.261663 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0"
Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.261699 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.261730 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/df285d49-46a0-4b41-8d8b-7493edd5e268-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.261860 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.261960 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.262001 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " 
pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.262841 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.262938 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.262991 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.263127 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.263170 3556 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.263273 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.263321 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.364842 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.364918 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" 
(UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.364945 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.364994 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365043 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365082 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365138 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: 
\"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365163 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365203 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365231 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365273 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365295 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/df285d49-46a0-4b41-8d8b-7493edd5e268-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365321 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365366 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.365389 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.369406 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.373201 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.374480 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.375750 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.378123 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.379588 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.379671 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.380294 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.381598 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.384532 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.385462 3556 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.386049 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/df285d49-46a0-4b41-8d8b-7493edd5e268-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.390617 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.393713 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/df285d49-46a0-4b41-8d8b-7493edd5e268-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.394716 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/df285d49-46a0-4b41-8d8b-7493edd5e268-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"df285d49-46a0-4b41-8d8b-7493edd5e268\") " pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 
00:24:45.452322 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:24:45 crc kubenswrapper[3556]: I1128 00:24:45.890266 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Nov 28 00:24:46 crc kubenswrapper[3556]: I1128 00:24:46.441248 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"df285d49-46a0-4b41-8d8b-7493edd5e268","Type":"ContainerStarted","Data":"16722fb3111d4b54cc6620c2c18e1dbe0e057ef5ef809f2edf6bb4a088062eb0"} Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.326903 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb"] Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.327048 3556 topology_manager.go:215] "Topology Admit Handler" podUID="41313fd7-b796-4874-bf54-a7bf84b17e2c" podNamespace="cert-manager-operator" podName="cert-manager-operator-controller-manager-5774f55cb7-wwtfb" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.327668 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.330994 3556 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-pxsq5" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.331407 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.331657 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.340783 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb"] Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.489306 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxfpf\" (UniqueName: \"kubernetes.io/projected/41313fd7-b796-4874-bf54-a7bf84b17e2c-kube-api-access-rxfpf\") pod \"cert-manager-operator-controller-manager-5774f55cb7-wwtfb\" (UID: \"41313fd7-b796-4874-bf54-a7bf84b17e2c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.590235 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rxfpf\" (UniqueName: \"kubernetes.io/projected/41313fd7-b796-4874-bf54-a7bf84b17e2c-kube-api-access-rxfpf\") pod \"cert-manager-operator-controller-manager-5774f55cb7-wwtfb\" (UID: \"41313fd7-b796-4874-bf54-a7bf84b17e2c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.613296 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rxfpf\" (UniqueName: \"kubernetes.io/projected/41313fd7-b796-4874-bf54-a7bf84b17e2c-kube-api-access-rxfpf\") pod \"cert-manager-operator-controller-manager-5774f55cb7-wwtfb\" (UID: \"41313fd7-b796-4874-bf54-a7bf84b17e2c\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" Nov 28 00:24:47 crc kubenswrapper[3556]: I1128 00:24:47.745511 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.208499 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.208804 3556 topology_manager.go:215] "Topology Admit Handler" podUID="bc793216-a760-4653-9d22-4744eb2ac5b3" podNamespace="openshift-operators" podName="obo-prometheus-operator-864b67f9b9-8jfq9" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.209350 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.211275 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-84c4d" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.213383 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.213415 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.224683 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.327501 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xd4q\" (UniqueName: \"kubernetes.io/projected/bc793216-a760-4653-9d22-4744eb2ac5b3-kube-api-access-2xd4q\") pod \"obo-prometheus-operator-864b67f9b9-8jfq9\" (UID: \"bc793216-a760-4653-9d22-4744eb2ac5b3\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.348442 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.348567 3556 topology_manager.go:215] "Topology Admit Handler" podUID="e3267f68-5450-454b-8ce9-39e0039c4f6f" podNamespace="openshift-operators" podName="obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.349304 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.353909 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dd45p" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.354202 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.362366 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.362479 3556 topology_manager.go:215] "Topology Admit Handler" podUID="3ff69e08-3c02-49ce-92a7-6a30d3c6191e" podNamespace="openshift-operators" podName="obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.363195 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.380921 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.389834 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.428549 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ff69e08-3c02-49ce-92a7-6a30d3c6191e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k\" (UID: \"3ff69e08-3c02-49ce-92a7-6a30d3c6191e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.428604 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3267f68-5450-454b-8ce9-39e0039c4f6f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2\" (UID: \"e3267f68-5450-454b-8ce9-39e0039c4f6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.428648 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ff69e08-3c02-49ce-92a7-6a30d3c6191e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k\" (UID: \"3ff69e08-3c02-49ce-92a7-6a30d3c6191e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 
00:24:50.428680 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3267f68-5450-454b-8ce9-39e0039c4f6f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2\" (UID: \"e3267f68-5450-454b-8ce9-39e0039c4f6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.428712 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2xd4q\" (UniqueName: \"kubernetes.io/projected/bc793216-a760-4653-9d22-4744eb2ac5b3-kube-api-access-2xd4q\") pod \"obo-prometheus-operator-864b67f9b9-8jfq9\" (UID: \"bc793216-a760-4653-9d22-4744eb2ac5b3\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.450163 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xd4q\" (UniqueName: \"kubernetes.io/projected/bc793216-a760-4653-9d22-4744eb2ac5b3-kube-api-access-2xd4q\") pod \"obo-prometheus-operator-864b67f9b9-8jfq9\" (UID: \"bc793216-a760-4653-9d22-4744eb2ac5b3\") " pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.495563 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-65df589ff7-dmlxl"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.495960 3556 topology_manager.go:215] "Topology Admit Handler" podUID="02a59992-a6d8-4bb1-b714-9c47f7af71f8" podNamespace="openshift-operators" podName="observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.496588 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.502380 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-vd699" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.502699 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.513762 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-65df589ff7-dmlxl"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.526286 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.529569 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ff69e08-3c02-49ce-92a7-6a30d3c6191e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k\" (UID: \"3ff69e08-3c02-49ce-92a7-6a30d3c6191e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.529625 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3267f68-5450-454b-8ce9-39e0039c4f6f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2\" (UID: \"e3267f68-5450-454b-8ce9-39e0039c4f6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.529676 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/3ff69e08-3c02-49ce-92a7-6a30d3c6191e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k\" (UID: \"3ff69e08-3c02-49ce-92a7-6a30d3c6191e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.529702 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3267f68-5450-454b-8ce9-39e0039c4f6f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2\" (UID: \"e3267f68-5450-454b-8ce9-39e0039c4f6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.539562 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ff69e08-3c02-49ce-92a7-6a30d3c6191e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k\" (UID: \"3ff69e08-3c02-49ce-92a7-6a30d3c6191e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.542690 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3267f68-5450-454b-8ce9-39e0039c4f6f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2\" (UID: \"e3267f68-5450-454b-8ce9-39e0039c4f6f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.544547 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3267f68-5450-454b-8ce9-39e0039c4f6f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2\" (UID: \"e3267f68-5450-454b-8ce9-39e0039c4f6f\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.559562 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ff69e08-3c02-49ce-92a7-6a30d3c6191e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k\" (UID: \"3ff69e08-3c02-49ce-92a7-6a30d3c6191e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.589390 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-gdfw7"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.589501 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4" podNamespace="openshift-operators" podName="perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.593304 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.605002 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-wlqk2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.630452 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/02a59992-a6d8-4bb1-b714-9c47f7af71f8-observability-operator-tls\") pod \"observability-operator-65df589ff7-dmlxl\" (UID: \"02a59992-a6d8-4bb1-b714-9c47f7af71f8\") " pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.630541 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5qh\" (UniqueName: \"kubernetes.io/projected/02a59992-a6d8-4bb1-b714-9c47f7af71f8-kube-api-access-wh5qh\") pod \"observability-operator-65df589ff7-dmlxl\" (UID: \"02a59992-a6d8-4bb1-b714-9c47f7af71f8\") " pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.656379 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-gdfw7"] Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.682235 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.692266 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.732997 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w66vj\" (UniqueName: \"kubernetes.io/projected/9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4-kube-api-access-w66vj\") pod \"perses-operator-574fd8d65d-gdfw7\" (UID: \"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4\") " pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.733071 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4-openshift-service-ca\") pod \"perses-operator-574fd8d65d-gdfw7\" (UID: \"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4\") " pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.733105 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/02a59992-a6d8-4bb1-b714-9c47f7af71f8-observability-operator-tls\") pod \"observability-operator-65df589ff7-dmlxl\" (UID: \"02a59992-a6d8-4bb1-b714-9c47f7af71f8\") " pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.733129 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5qh\" (UniqueName: \"kubernetes.io/projected/02a59992-a6d8-4bb1-b714-9c47f7af71f8-kube-api-access-wh5qh\") pod \"observability-operator-65df589ff7-dmlxl\" (UID: \"02a59992-a6d8-4bb1-b714-9c47f7af71f8\") " pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.742744 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for 
volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/02a59992-a6d8-4bb1-b714-9c47f7af71f8-observability-operator-tls\") pod \"observability-operator-65df589ff7-dmlxl\" (UID: \"02a59992-a6d8-4bb1-b714-9c47f7af71f8\") " pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.761734 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5qh\" (UniqueName: \"kubernetes.io/projected/02a59992-a6d8-4bb1-b714-9c47f7af71f8-kube-api-access-wh5qh\") pod \"observability-operator-65df589ff7-dmlxl\" (UID: \"02a59992-a6d8-4bb1-b714-9c47f7af71f8\") " pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.821397 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-65df589ff7-dmlxl" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.833780 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w66vj\" (UniqueName: \"kubernetes.io/projected/9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4-kube-api-access-w66vj\") pod \"perses-operator-574fd8d65d-gdfw7\" (UID: \"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4\") " pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.833864 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4-openshift-service-ca\") pod \"perses-operator-574fd8d65d-gdfw7\" (UID: \"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4\") " pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.834701 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4-openshift-service-ca\") pod \"perses-operator-574fd8d65d-gdfw7\" (UID: \"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4\") " pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.853767 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w66vj\" (UniqueName: \"kubernetes.io/projected/9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4-kube-api-access-w66vj\") pod \"perses-operator-574fd8d65d-gdfw7\" (UID: \"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4\") " pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:50 crc kubenswrapper[3556]: I1128 00:24:50.907339 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.392132 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-65df589ff7-dmlxl"] Nov 28 00:24:53 crc kubenswrapper[3556]: W1128 00:24:53.406947 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02a59992_a6d8_4bb1_b714_9c47f7af71f8.slice/crio-5ee548a49107a59d0beb33fe5ad1af7ee6c082bf4913fd400c31891f9d5dfc96 WatchSource:0}: Error finding container 5ee548a49107a59d0beb33fe5ad1af7ee6c082bf4913fd400c31891f9d5dfc96: Status 404 returned error can't find the container with id 5ee548a49107a59d0beb33fe5ad1af7ee6c082bf4913fd400c31891f9d5dfc96 Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.410319 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2"] Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.414059 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-574fd8d65d-gdfw7"] Nov 28 00:24:53 crc kubenswrapper[3556]: 
I1128 00:24:53.486316 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" event={"ID":"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4","Type":"ContainerStarted","Data":"990b2ed2e205a00e7734fa16f0acd4b44a4b58a97d7cb95905ff49495717fa95"} Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.487423 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" event={"ID":"e3267f68-5450-454b-8ce9-39e0039c4f6f","Type":"ContainerStarted","Data":"edd13cc2b09959f795de47394c60304c6d1873c508d69183bafa529b40859a69"} Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.489687 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-65df589ff7-dmlxl" event={"ID":"02a59992-a6d8-4bb1-b714-9c47f7af71f8","Type":"ContainerStarted","Data":"5ee548a49107a59d0beb33fe5ad1af7ee6c082bf4913fd400c31891f9d5dfc96"} Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.495914 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9" event={"ID":"ce9e867e-df83-4c18-9f61-47fd60b9240d","Type":"ContainerStarted","Data":"790e5ffb78bfb14f7ddcf91e700e46936ed46a4858e4f8d11dae44dff1a52181"} Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.513727 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-7b75f466d4-lgqg9" podStartSLOduration=1.812807599 podStartE2EDuration="14.513683054s" podCreationTimestamp="2025-11-28 00:24:39 +0000 UTC" firstStartedPulling="2025-11-28 00:24:40.291885725 +0000 UTC m=+741.884117725" lastFinishedPulling="2025-11-28 00:24:52.9927612 +0000 UTC m=+754.584993180" observedRunningTime="2025-11-28 00:24:53.513573691 +0000 UTC m=+755.105805691" watchObservedRunningTime="2025-11-28 00:24:53.513683054 +0000 UTC m=+755.105915054" Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 
00:24:53.562636 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9"] Nov 28 00:24:53 crc kubenswrapper[3556]: W1128 00:24:53.563216 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc793216_a760_4653_9d22_4744eb2ac5b3.slice/crio-c0fee15e093a7907637c1d6e04a96cb924f485b76f03066cb4b84421dc06083d WatchSource:0}: Error finding container c0fee15e093a7907637c1d6e04a96cb924f485b76f03066cb4b84421dc06083d: Status 404 returned error can't find the container with id c0fee15e093a7907637c1d6e04a96cb924f485b76f03066cb4b84421dc06083d Nov 28 00:24:53 crc kubenswrapper[3556]: W1128 00:24:53.565179 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41313fd7_b796_4874_bf54_a7bf84b17e2c.slice/crio-6eccfcfc31b6f9da32f6389b43b181fe4dfca259458eefc1c607a36e85b80c9b WatchSource:0}: Error finding container 6eccfcfc31b6f9da32f6389b43b181fe4dfca259458eefc1c607a36e85b80c9b: Status 404 returned error can't find the container with id 6eccfcfc31b6f9da32f6389b43b181fe4dfca259458eefc1c607a36e85b80c9b Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.567380 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb"] Nov 28 00:24:53 crc kubenswrapper[3556]: I1128 00:24:53.571547 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k"] Nov 28 00:24:54 crc kubenswrapper[3556]: I1128 00:24:54.516947 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" event={"ID":"3ff69e08-3c02-49ce-92a7-6a30d3c6191e","Type":"ContainerStarted","Data":"f22406070aae42e0fa27a7670029196e9b0ef045b50c543b7469836f001c8a55"} Nov 28 00:24:54 
crc kubenswrapper[3556]: I1128 00:24:54.523387 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" event={"ID":"bc793216-a760-4653-9d22-4744eb2ac5b3","Type":"ContainerStarted","Data":"c0fee15e093a7907637c1d6e04a96cb924f485b76f03066cb4b84421dc06083d"} Nov 28 00:24:54 crc kubenswrapper[3556]: I1128 00:24:54.525665 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" event={"ID":"41313fd7-b796-4874-bf54-a7bf84b17e2c","Type":"ContainerStarted","Data":"6eccfcfc31b6f9da32f6389b43b181fe4dfca259458eefc1c607a36e85b80c9b"} Nov 28 00:25:18 crc kubenswrapper[3556]: I1128 00:25:18.699332 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:25:18 crc kubenswrapper[3556]: I1128 00:25:18.699943 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:25:18 crc kubenswrapper[3556]: I1128 00:25:18.699976 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:25:18 crc kubenswrapper[3556]: I1128 00:25:18.700031 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:25:18 crc kubenswrapper[3556]: I1128 00:25:18.700047 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.278150 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.278444 3556 topology_manager.go:215] "Topology Admit Handler" podUID="6d828f9a-53d6-40fd-a89c-95441345a8c6" podNamespace="service-telemetry" 
podName="service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.279330 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.281098 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ps7tk" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.281379 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.281620 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.284421 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.298215 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458055 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458182 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458254 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458285 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458339 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458370 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458438 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458487 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj9xj\" (UniqueName: \"kubernetes.io/projected/6d828f9a-53d6-40fd-a89c-95441345a8c6-kube-api-access-lj9xj\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458523 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458585 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458624 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.458668 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559584 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lj9xj\" (UniqueName: \"kubernetes.io/projected/6d828f9a-53d6-40fd-a89c-95441345a8c6-kube-api-access-lj9xj\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559642 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559684 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559713 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559735 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559757 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559776 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559797 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 
00:25:19.559818 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559825 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559850 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559871 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.559898 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.561266 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.561513 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.561563 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.561898 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.562152 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.562165 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.562290 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.562527 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.566766 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.587564 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-push\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.590679 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj9xj\" (UniqueName: \"kubernetes.io/projected/6d828f9a-53d6-40fd-a89c-95441345a8c6-kube-api-access-lj9xj\") pod \"service-telemetry-operator-1-build\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:19 crc kubenswrapper[3556]: I1128 00:25:19.592271 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Nov 28 00:25:22 crc kubenswrapper[3556]: I1128 00:25:22.664461 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 00:25:22 crc kubenswrapper[3556]: I1128 00:25:22.664541 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.098565 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Nov 28 00:25:28 crc kubenswrapper[3556]: W1128 00:25:28.121726 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d828f9a_53d6_40fd_a89c_95441345a8c6.slice/crio-55ecddbb5b5d733c11509c5c349b18d18238b08ebda8cd656a7f54f14b0ecc7b WatchSource:0}: Error finding container 55ecddbb5b5d733c11509c5c349b18d18238b08ebda8cd656a7f54f14b0ecc7b: Status 404 returned error can't find the container with id 55ecddbb5b5d733c11509c5c349b18d18238b08ebda8cd656a7f54f14b0ecc7b
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.716418 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" event={"ID":"3ff69e08-3c02-49ce-92a7-6a30d3c6191e","Type":"ContainerStarted","Data":"38724ed1c1a9706a8e9c39a2e7565f5ca372833492fef10464485735471d4962"}
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.718093 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" event={"ID":"e3267f68-5450-454b-8ce9-39e0039c4f6f","Type":"ContainerStarted","Data":"71f1fad80efa30ef11c7b105ad250060cf80a5c00e42f68bd289b0c47acaed69"}
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.719838 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-65df589ff7-dmlxl" event={"ID":"02a59992-a6d8-4bb1-b714-9c47f7af71f8","Type":"ContainerStarted","Data":"8b26b110cc6c5f0edc63c2d844d92a4d9e38bce350f0498889a15b6c82240007"}
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.721257 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" event={"ID":"41313fd7-b796-4874-bf54-a7bf84b17e2c","Type":"ContainerStarted","Data":"10b90297ede32e6262addcfc67695732fa5770665a087b217bdd76640a275981"}
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.722431 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"6d828f9a-53d6-40fd-a89c-95441345a8c6","Type":"ContainerStarted","Data":"55ecddbb5b5d733c11509c5c349b18d18238b08ebda8cd656a7f54f14b0ecc7b"}
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.723569 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" event={"ID":"9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4","Type":"ContainerStarted","Data":"6cd453ea440162636abf1417ae4203765df0b4b869c47fef1b7a852c733c9418"}
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.723993 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-574fd8d65d-gdfw7"
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.742771 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k" podStartSLOduration=4.6596149350000005 podStartE2EDuration="38.742711029s" podCreationTimestamp="2025-11-28 00:24:50 +0000 UTC" firstStartedPulling="2025-11-28 00:24:53.587737915 +0000 UTC m=+755.179969905" lastFinishedPulling="2025-11-28 00:25:27.670834009 +0000 UTC m=+789.263065999" observedRunningTime="2025-11-28 00:25:28.741760397 +0000 UTC m=+790.333992397" watchObservedRunningTime="2025-11-28 00:25:28.742711029 +0000 UTC m=+790.334943029"
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.762897 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/observability-operator-65df589ff7-dmlxl" podStartSLOduration=4.367179944 podStartE2EDuration="38.762859639s" podCreationTimestamp="2025-11-28 00:24:50 +0000 UTC" firstStartedPulling="2025-11-28 00:24:53.409404305 +0000 UTC m=+755.001636295" lastFinishedPulling="2025-11-28 00:25:27.805084 +0000 UTC m=+789.397315990" observedRunningTime="2025-11-28 00:25:28.761678792 +0000 UTC m=+790.353910782" watchObservedRunningTime="2025-11-28 00:25:28.762859639 +0000 UTC m=+790.355091639"
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.782870 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" podStartSLOduration=4.401898895 podStartE2EDuration="38.782832776s" podCreationTimestamp="2025-11-28 00:24:50 +0000 UTC" firstStartedPulling="2025-11-28 00:24:53.425303006 +0000 UTC m=+755.017534996" lastFinishedPulling="2025-11-28 00:25:27.806236877 +0000 UTC m=+789.398468877" observedRunningTime="2025-11-28 00:25:28.781365782 +0000 UTC m=+790.373597772" watchObservedRunningTime="2025-11-28 00:25:28.782832776 +0000 UTC m=+790.375064766"
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.810531 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5774f55cb7-wwtfb" podStartSLOduration=7.700671505 podStartE2EDuration="41.810462928s" podCreationTimestamp="2025-11-28 00:24:47 +0000 UTC" firstStartedPulling="2025-11-28 00:24:53.578005284 +0000 UTC m=+755.170237274" lastFinishedPulling="2025-11-28 00:25:27.687796707 +0000 UTC m=+789.280028697" observedRunningTime="2025-11-28 00:25:28.810181662 +0000 UTC m=+790.402413652" watchObservedRunningTime="2025-11-28 00:25:28.810462928 +0000 UTC m=+790.402694918"
Nov 28 00:25:28 crc kubenswrapper[3556]: I1128 00:25:28.846048 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2" podStartSLOduration=4.338596869 podStartE2EDuration="38.84597433s" podCreationTimestamp="2025-11-28 00:24:50 +0000 UTC" firstStartedPulling="2025-11-28 00:24:53.417329445 +0000 UTC m=+755.009561435" lastFinishedPulling="2025-11-28 00:25:27.924706916 +0000 UTC m=+789.516938896" observedRunningTime="2025-11-28 00:25:28.843142785 +0000 UTC m=+790.435374785" watchObservedRunningTime="2025-11-28 00:25:28.84597433 +0000 UTC m=+790.438206340"
Nov 28 00:25:29 crc kubenswrapper[3556]: I1128 00:25:29.463753 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Nov 28 00:25:29 crc kubenswrapper[3556]: I1128 00:25:29.731871 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" event={"ID":"bc793216-a760-4653-9d22-4744eb2ac5b3","Type":"ContainerStarted","Data":"b613e011598e2393bb382ec060ec574242e64947e2370609dccdab5c919a80db"}
Nov 28 00:25:29 crc kubenswrapper[3556]: I1128 00:25:29.734446 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"df285d49-46a0-4b41-8d8b-7493edd5e268","Type":"ContainerStarted","Data":"b4c4fa55e130915ce6f28bc3323a222fe3ad1e4564e44a22a872ce981d2e361c"}
Nov 28 00:25:29 crc kubenswrapper[3556]: I1128 00:25:29.734508 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-65df589ff7-dmlxl"
Nov 28 00:25:29 crc kubenswrapper[3556]: I1128 00:25:29.736840 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-65df589ff7-dmlxl"
Nov 28 00:25:29 crc kubenswrapper[3556]: I1128 00:25:29.756908 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-864b67f9b9-8jfq9" podStartSLOduration=5.650886682 podStartE2EDuration="39.756858777s" podCreationTimestamp="2025-11-28 00:24:50 +0000 UTC" firstStartedPulling="2025-11-28 00:24:53.578630179 +0000 UTC m=+755.170862169" lastFinishedPulling="2025-11-28 00:25:27.684602254 +0000 UTC m=+789.276834264" observedRunningTime="2025-11-28 00:25:29.756297554 +0000 UTC m=+791.348529554" watchObservedRunningTime="2025-11-28 00:25:29.756858777 +0000 UTC m=+791.349090777"
Nov 28 00:25:30 crc kubenswrapper[3556]: I1128 00:25:30.124818 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Nov 28 00:25:30 crc kubenswrapper[3556]: I1128 00:25:30.163934 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.167145 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.167253 3556 topology_manager.go:215] "Topology Admit Handler" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" podNamespace="service-telemetry" podName="service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.168271 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.170445 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.170861 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.171172 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.246708 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.311924 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.311988 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312108 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsdt\" (UniqueName: \"kubernetes.io/projected/62c48da3-a94b-494b-aee7-29345ef503fd-kube-api-access-cdsdt\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312148 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312181 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312293 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312335 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312363 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312421 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312463 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312532 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.312567 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413259 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413385 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cdsdt\" (UniqueName: \"kubernetes.io/projected/62c48da3-a94b-494b-aee7-29345ef503fd-kube-api-access-cdsdt\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413417 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413446 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413577 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413648 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413691 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413731 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413733 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413803 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413805 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413808 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413886 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.413916 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.414038 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.414120 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.414157 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.414335 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.414538 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.414580 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.427904 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-push\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.432501 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.433935 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdsdt\" (UniqueName: \"kubernetes.io/projected/62c48da3-a94b-494b-aee7-29345ef503fd-kube-api-access-cdsdt\") pod \"service-telemetry-operator-2-build\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.484872 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.754747 3556 generic.go:334] "Generic (PLEG): container finished" podID="df285d49-46a0-4b41-8d8b-7493edd5e268" containerID="b4c4fa55e130915ce6f28bc3323a222fe3ad1e4564e44a22a872ce981d2e361c" exitCode=0
Nov 28 00:25:31 crc kubenswrapper[3556]: I1128 00:25:31.756483 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"df285d49-46a0-4b41-8d8b-7493edd5e268","Type":"ContainerDied","Data":"b4c4fa55e130915ce6f28bc3323a222fe3ad1e4564e44a22a872ce981d2e361c"}
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.283432 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"]
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.284533 3556 topology_manager.go:215] "Topology Admit Handler" podUID="eeef082f-da5f-460c-bd45-41d7602f97ef" podNamespace="cert-manager" podName="cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.285300 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.289784 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.291343 3556 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6nq9l"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.293765 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.310352 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"]
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.425705 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584mv\" (UniqueName: \"kubernetes.io/projected/eeef082f-da5f-460c-bd45-41d7602f97ef-kube-api-access-584mv\") pod \"cert-manager-webhook-58ffc98b58-4dq5m\" (UID: \"eeef082f-da5f-460c-bd45-41d7602f97ef\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.425784 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeef082f-da5f-460c-bd45-41d7602f97ef-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-4dq5m\" (UID: \"eeef082f-da5f-460c-bd45-41d7602f97ef\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.526850 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-584mv\" (UniqueName: \"kubernetes.io/projected/eeef082f-da5f-460c-bd45-41d7602f97ef-kube-api-access-584mv\") pod \"cert-manager-webhook-58ffc98b58-4dq5m\" (UID: \"eeef082f-da5f-460c-bd45-41d7602f97ef\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.526924 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeef082f-da5f-460c-bd45-41d7602f97ef-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-4dq5m\" (UID: \"eeef082f-da5f-460c-bd45-41d7602f97ef\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.550675 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeef082f-da5f-460c-bd45-41d7602f97ef-bound-sa-token\") pod \"cert-manager-webhook-58ffc98b58-4dq5m\" (UID: \"eeef082f-da5f-460c-bd45-41d7602f97ef\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.556204 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-584mv\" (UniqueName: \"kubernetes.io/projected/eeef082f-da5f-460c-bd45-41d7602f97ef-kube-api-access-584mv\") pod \"cert-manager-webhook-58ffc98b58-4dq5m\" (UID: \"eeef082f-da5f-460c-bd45-41d7602f97ef\") " pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:32 crc kubenswrapper[3556]: I1128 00:25:32.601229 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"
Nov 28 00:25:34 crc kubenswrapper[3556]: I1128 00:25:34.961980 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-58ffc98b58-4dq5m"]
Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.216892 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Nov 28 00:25:35 crc kubenswrapper[3556]: W1128 00:25:35.259976 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62c48da3_a94b_494b_aee7_29345ef503fd.slice/crio-22e30e913c85137f85dc5f38a2f13eb8709e132508efcfe38e35cb9888988d6d WatchSource:0}: Error finding container 22e30e913c85137f85dc5f38a2f13eb8709e132508efcfe38e35cb9888988d6d: Status 404 returned error can't find the container with id 22e30e913c85137f85dc5f38a2f13eb8709e132508efcfe38e35cb9888988d6d
Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.782188 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"62c48da3-a94b-494b-aee7-29345ef503fd","Type":"ContainerStarted","Data":"e5f0cae60fa895ca609331cb5ac1e8224790f738fee7275743ab6fa2a43e67f4"}
Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.782511 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"62c48da3-a94b-494b-aee7-29345ef503fd","Type":"ContainerStarted","Data":"22e30e913c85137f85dc5f38a2f13eb8709e132508efcfe38e35cb9888988d6d"}
Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.784146 3556 generic.go:334] "Generic (PLEG): container finished" podID="df285d49-46a0-4b41-8d8b-7493edd5e268" containerID="1f5df7e1ce5354df9aa1567329d8c132e13628813e87a0a31c9cbaf5e2c5b2b1" exitCode=0
Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.784196 3556 kubelet.go:2461] "SyncLoop (PLEG): event
for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"df285d49-46a0-4b41-8d8b-7493edd5e268","Type":"ContainerDied","Data":"1f5df7e1ce5354df9aa1567329d8c132e13628813e87a0a31c9cbaf5e2c5b2b1"} Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.787161 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"6d828f9a-53d6-40fd-a89c-95441345a8c6","Type":"ContainerStarted","Data":"b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f"} Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.787269 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="6d828f9a-53d6-40fd-a89c-95441345a8c6" containerName="manage-dockerfile" containerID="cri-o://b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f" gracePeriod=30 Nov 28 00:25:35 crc kubenswrapper[3556]: I1128 00:25:35.798336 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m" event={"ID":"eeef082f-da5f-460c-bd45-41d7602f97ef","Type":"ContainerStarted","Data":"4f20a9c14b473a04327c41ce9764068566e3fe54bd15b1895d7b032b748f8821"} Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.053849 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d"] Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.053982 3556 topology_manager.go:215] "Topology Admit Handler" podUID="d7520d61-bf39-4dc2-a2a7-1d23584f20f7" podNamespace="cert-manager" podName="cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.054751 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.056664 3556 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-lbslf" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.062078 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d"] Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.176439 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4gcv\" (UniqueName: \"kubernetes.io/projected/d7520d61-bf39-4dc2-a2a7-1d23584f20f7-kube-api-access-g4gcv\") pod \"cert-manager-cainjector-6dcc74f67d-2k68d\" (UID: \"d7520d61-bf39-4dc2-a2a7-1d23584f20f7\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.176497 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7520d61-bf39-4dc2-a2a7-1d23584f20f7-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-2k68d\" (UID: \"d7520d61-bf39-4dc2-a2a7-1d23584f20f7\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.267127 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_6d828f9a-53d6-40fd-a89c-95441345a8c6/manage-dockerfile/0.log" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.267182 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.277782 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-g4gcv\" (UniqueName: \"kubernetes.io/projected/d7520d61-bf39-4dc2-a2a7-1d23584f20f7-kube-api-access-g4gcv\") pod \"cert-manager-cainjector-6dcc74f67d-2k68d\" (UID: \"d7520d61-bf39-4dc2-a2a7-1d23584f20f7\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.277852 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7520d61-bf39-4dc2-a2a7-1d23584f20f7-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-2k68d\" (UID: \"d7520d61-bf39-4dc2-a2a7-1d23584f20f7\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.299128 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7520d61-bf39-4dc2-a2a7-1d23584f20f7-bound-sa-token\") pod \"cert-manager-cainjector-6dcc74f67d-2k68d\" (UID: \"d7520d61-bf39-4dc2-a2a7-1d23584f20f7\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.300917 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4gcv\" (UniqueName: \"kubernetes.io/projected/d7520d61-bf39-4dc2-a2a7-1d23584f20f7-kube-api-access-g4gcv\") pod \"cert-manager-cainjector-6dcc74f67d-2k68d\" (UID: \"d7520d61-bf39-4dc2-a2a7-1d23584f20f7\") " pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379179 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-ca-bundles\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379229 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-node-pullsecrets\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379262 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-proxy-ca-bundles\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379299 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-run\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379329 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-root\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379357 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj9xj\" (UniqueName: \"kubernetes.io/projected/6d828f9a-53d6-40fd-a89c-95441345a8c6-kube-api-access-lj9xj\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: 
\"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379375 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379403 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-push\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379538 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildcachedir\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379593 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-pull\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379655 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-blob-cache\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc 
kubenswrapper[3556]: I1128 00:25:36.379685 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildworkdir\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.379721 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-system-configs\") pod \"6d828f9a-53d6-40fd-a89c-95441345a8c6\" (UID: \"6d828f9a-53d6-40fd-a89c-95441345a8c6\") " Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380073 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380137 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380152 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380372 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380401 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380416 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380446 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380666 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.380679 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.382413 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.382851 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.383126 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d828f9a-53d6-40fd-a89c-95441345a8c6-kube-api-access-lj9xj" (OuterVolumeSpecName: "kube-api-access-lj9xj") pod "6d828f9a-53d6-40fd-a89c-95441345a8c6" (UID: "6d828f9a-53d6-40fd-a89c-95441345a8c6"). InnerVolumeSpecName "kube-api-access-lj9xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.410574 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481447 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481573 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481590 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481603 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481618 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481653 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481666 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d828f9a-53d6-40fd-a89c-95441345a8c6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481679 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481712 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6d828f9a-53d6-40fd-a89c-95441345a8c6-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481725 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lj9xj\" (UniqueName: \"kubernetes.io/projected/6d828f9a-53d6-40fd-a89c-95441345a8c6-kube-api-access-lj9xj\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.481742 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/6d828f9a-53d6-40fd-a89c-95441345a8c6-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\"" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.805076 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_6d828f9a-53d6-40fd-a89c-95441345a8c6/manage-dockerfile/0.log" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.805143 3556 generic.go:334] "Generic (PLEG): container finished" podID="6d828f9a-53d6-40fd-a89c-95441345a8c6" containerID="b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f" exitCode=1 Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.805226 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"6d828f9a-53d6-40fd-a89c-95441345a8c6","Type":"ContainerDied","Data":"b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f"} Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.805246 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"6d828f9a-53d6-40fd-a89c-95441345a8c6","Type":"ContainerDied","Data":"55ecddbb5b5d733c11509c5c349b18d18238b08ebda8cd656a7f54f14b0ecc7b"} Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.805285 3556 scope.go:117] "RemoveContainer" containerID="b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.805309 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.811169 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"df285d49-46a0-4b41-8d8b-7493edd5e268","Type":"ContainerStarted","Data":"c3ff0dc12ad2af5d7b83906729bf8be7a0b9deba06ccd996ea3e74dd5370fc88"} Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.811242 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.848092 3556 scope.go:117] "RemoveContainer" containerID="b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f" Nov 28 00:25:36 crc kubenswrapper[3556]: E1128 00:25:36.848486 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f\": container with ID starting with b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f not found: ID does not exist" containerID="b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.848521 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f"} err="failed to get container status \"b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f\": rpc error: code = NotFound desc = could not find container \"b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f\": container with ID starting with b534a5293a66e72f3c39d6ada333fe99fbf07a481e9ec1183dc02bef2e30686f not found: ID does not exist" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.874386 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.59551749 podStartE2EDuration="51.874341329s" podCreationTimestamp="2025-11-28 00:24:45 +0000 UTC" firstStartedPulling="2025-11-28 00:24:45.93755919 +0000 UTC m=+747.529791180" lastFinishedPulling="2025-11-28 00:25:28.216383019 +0000 UTC m=+789.808615019" observedRunningTime="2025-11-28 00:25:36.87174496 +0000 UTC m=+798.463976960" watchObservedRunningTime="2025-11-28 00:25:36.874341329 +0000 UTC m=+798.466573319" Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.978563 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d"] Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.978820 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 28 00:25:36 crc kubenswrapper[3556]: I1128 00:25:36.978835 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Nov 28 00:25:37 crc kubenswrapper[3556]: I1128 00:25:37.815425 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" event={"ID":"d7520d61-bf39-4dc2-a2a7-1d23584f20f7","Type":"ContainerStarted","Data":"67762fe8a3c34378ebe2b39e38af7f4d44a241caf17806277cd2b251fefe30c0"} Nov 28 00:25:38 crc kubenswrapper[3556]: I1128 00:25:38.926309 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d828f9a-53d6-40fd-a89c-95441345a8c6" path="/var/lib/kubelet/pods/6d828f9a-53d6-40fd-a89c-95441345a8c6/volumes" Nov 28 00:25:40 crc kubenswrapper[3556]: I1128 00:25:40.909291 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-574fd8d65d-gdfw7" Nov 28 00:25:41 crc kubenswrapper[3556]: I1128 00:25:41.844163 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" 
event={"ID":"d7520d61-bf39-4dc2-a2a7-1d23584f20f7","Type":"ContainerStarted","Data":"2bdbfeaf36c8126732f01d2b1765faba8bc44dd50124ccfad3d8fd02efbd3283"} Nov 28 00:25:41 crc kubenswrapper[3556]: I1128 00:25:41.845802 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m" event={"ID":"eeef082f-da5f-460c-bd45-41d7602f97ef","Type":"ContainerStarted","Data":"7dba28e80fdbbc8b2a22dd5d5d3947d640e534ca4fb24d78e860ef56a474a0a9"} Nov 28 00:25:41 crc kubenswrapper[3556]: I1128 00:25:41.845942 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m" Nov 28 00:25:41 crc kubenswrapper[3556]: I1128 00:25:41.869516 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-6dcc74f67d-2k68d" podStartSLOduration=1.9037254369999999 podStartE2EDuration="5.869475353s" podCreationTimestamp="2025-11-28 00:25:36 +0000 UTC" firstStartedPulling="2025-11-28 00:25:36.977454808 +0000 UTC m=+798.569686798" lastFinishedPulling="2025-11-28 00:25:40.943204724 +0000 UTC m=+802.535436714" observedRunningTime="2025-11-28 00:25:41.864936119 +0000 UTC m=+803.457168109" watchObservedRunningTime="2025-11-28 00:25:41.869475353 +0000 UTC m=+803.461707353" Nov 28 00:25:41 crc kubenswrapper[3556]: I1128 00:25:41.901728 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m" podStartSLOduration=4.251383208 podStartE2EDuration="9.901682429s" podCreationTimestamp="2025-11-28 00:25:32 +0000 UTC" firstStartedPulling="2025-11-28 00:25:34.993660539 +0000 UTC m=+796.585892529" lastFinishedPulling="2025-11-28 00:25:40.64395976 +0000 UTC m=+802.236191750" observedRunningTime="2025-11-28 00:25:41.899346296 +0000 UTC m=+803.491578286" watchObservedRunningTime="2025-11-28 00:25:41.901682429 +0000 UTC m=+803.493914429" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 
00:25:43.756674 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-755d7666d5-kjtgj"] Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.757128 3556 topology_manager.go:215] "Topology Admit Handler" podUID="c5833d33-da6c-4528-8318-b5778f1cc080" podNamespace="cert-manager" podName="cert-manager-755d7666d5-kjtgj" Nov 28 00:25:43 crc kubenswrapper[3556]: E1128 00:25:43.757298 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6d828f9a-53d6-40fd-a89c-95441345a8c6" containerName="manage-dockerfile" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.757312 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d828f9a-53d6-40fd-a89c-95441345a8c6" containerName="manage-dockerfile" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.757457 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d828f9a-53d6-40fd-a89c-95441345a8c6" containerName="manage-dockerfile" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.757965 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.760600 3556 reflector.go:351] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lbb87" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.770127 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-755d7666d5-kjtgj"] Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.865672 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9qv\" (UniqueName: \"kubernetes.io/projected/c5833d33-da6c-4528-8318-b5778f1cc080-kube-api-access-rb9qv\") pod \"cert-manager-755d7666d5-kjtgj\" (UID: \"c5833d33-da6c-4528-8318-b5778f1cc080\") " pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.865867 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5833d33-da6c-4528-8318-b5778f1cc080-bound-sa-token\") pod \"cert-manager-755d7666d5-kjtgj\" (UID: \"c5833d33-da6c-4528-8318-b5778f1cc080\") " pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.966599 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5833d33-da6c-4528-8318-b5778f1cc080-bound-sa-token\") pod \"cert-manager-755d7666d5-kjtgj\" (UID: \"c5833d33-da6c-4528-8318-b5778f1cc080\") " pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.966974 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9qv\" (UniqueName: \"kubernetes.io/projected/c5833d33-da6c-4528-8318-b5778f1cc080-kube-api-access-rb9qv\") pod \"cert-manager-755d7666d5-kjtgj\" (UID: 
\"c5833d33-da6c-4528-8318-b5778f1cc080\") " pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.990670 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5833d33-da6c-4528-8318-b5778f1cc080-bound-sa-token\") pod \"cert-manager-755d7666d5-kjtgj\" (UID: \"c5833d33-da6c-4528-8318-b5778f1cc080\") " pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:43 crc kubenswrapper[3556]: I1128 00:25:43.998067 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9qv\" (UniqueName: \"kubernetes.io/projected/c5833d33-da6c-4528-8318-b5778f1cc080-kube-api-access-rb9qv\") pod \"cert-manager-755d7666d5-kjtgj\" (UID: \"c5833d33-da6c-4528-8318-b5778f1cc080\") " pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:44 crc kubenswrapper[3556]: I1128 00:25:44.085506 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-755d7666d5-kjtgj" Nov 28 00:25:44 crc kubenswrapper[3556]: W1128 00:25:44.473056 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5833d33_da6c_4528_8318_b5778f1cc080.slice/crio-cc8a5b6da076ad6d5798af703aa53f2dd32b3b84df17b5104318d2df34f25b37 WatchSource:0}: Error finding container cc8a5b6da076ad6d5798af703aa53f2dd32b3b84df17b5104318d2df34f25b37: Status 404 returned error can't find the container with id cc8a5b6da076ad6d5798af703aa53f2dd32b3b84df17b5104318d2df34f25b37 Nov 28 00:25:44 crc kubenswrapper[3556]: I1128 00:25:44.476753 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-755d7666d5-kjtgj"] Nov 28 00:25:44 crc kubenswrapper[3556]: I1128 00:25:44.871830 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-755d7666d5-kjtgj" 
event={"ID":"c5833d33-da6c-4528-8318-b5778f1cc080","Type":"ContainerStarted","Data":"cc8a5b6da076ad6d5798af703aa53f2dd32b3b84df17b5104318d2df34f25b37"} Nov 28 00:25:45 crc kubenswrapper[3556]: I1128 00:25:45.878505 3556 generic.go:334] "Generic (PLEG): container finished" podID="62c48da3-a94b-494b-aee7-29345ef503fd" containerID="e5f0cae60fa895ca609331cb5ac1e8224790f738fee7275743ab6fa2a43e67f4" exitCode=0 Nov 28 00:25:45 crc kubenswrapper[3556]: I1128 00:25:45.878586 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"62c48da3-a94b-494b-aee7-29345ef503fd","Type":"ContainerDied","Data":"e5f0cae60fa895ca609331cb5ac1e8224790f738fee7275743ab6fa2a43e67f4"} Nov 28 00:25:46 crc kubenswrapper[3556]: I1128 00:25:46.883605 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-755d7666d5-kjtgj" event={"ID":"c5833d33-da6c-4528-8318-b5778f1cc080","Type":"ContainerStarted","Data":"e7b0412bac209800ca595886931944648564d614561e6ce0e7d2fe3963aa6798"} Nov 28 00:25:47 crc kubenswrapper[3556]: I1128 00:25:47.604761 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-58ffc98b58-4dq5m" Nov 28 00:25:47 crc kubenswrapper[3556]: I1128 00:25:47.893261 3556 generic.go:334] "Generic (PLEG): container finished" podID="62c48da3-a94b-494b-aee7-29345ef503fd" containerID="652c82a8cc971cd07650e5067eb99c5a560bf7dd877d996d159c6d2d1a1b855f" exitCode=0 Nov 28 00:25:47 crc kubenswrapper[3556]: I1128 00:25:47.893304 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"62c48da3-a94b-494b-aee7-29345ef503fd","Type":"ContainerDied","Data":"652c82a8cc971cd07650e5067eb99c5a560bf7dd877d996d159c6d2d1a1b855f"} Nov 28 00:25:47 crc kubenswrapper[3556]: I1128 00:25:47.940541 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="cert-manager/cert-manager-755d7666d5-kjtgj" podStartSLOduration=4.940503457 podStartE2EDuration="4.940503457s" podCreationTimestamp="2025-11-28 00:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:25:47.921701288 +0000 UTC m=+809.513933268" watchObservedRunningTime="2025-11-28 00:25:47.940503457 +0000 UTC m=+809.532735457" Nov 28 00:25:48 crc kubenswrapper[3556]: I1128 00:25:48.039536 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_62c48da3-a94b-494b-aee7-29345ef503fd/manage-dockerfile/0.log" Nov 28 00:25:48 crc kubenswrapper[3556]: I1128 00:25:48.899318 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"62c48da3-a94b-494b-aee7-29345ef503fd","Type":"ContainerStarted","Data":"550207af96d02352a5d48fb3114aaf2f3506b574f2a60ffd4f162d3780ed4f16"} Nov 28 00:25:48 crc kubenswrapper[3556]: I1128 00:25:48.942545 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=17.942495448 podStartE2EDuration="17.942495448s" podCreationTimestamp="2025-11-28 00:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:25:48.935704663 +0000 UTC m=+810.527936663" watchObservedRunningTime="2025-11-28 00:25:48.942495448 +0000 UTC m=+810.534727458" Nov 28 00:25:50 crc kubenswrapper[3556]: I1128 00:25:50.580053 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="df285d49-46a0-4b41-8d8b-7493edd5e268" containerName="elasticsearch" probeResult="failure" output=< Nov 28 00:25:50 crc kubenswrapper[3556]: {"timestamp": "2025-11-28T00:25:50+00:00", "message": "readiness probe failed", 
"curl_rc": "7"} Nov 28 00:25:50 crc kubenswrapper[3556]: > Nov 28 00:25:52 crc kubenswrapper[3556]: I1128 00:25:52.664174 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:25:52 crc kubenswrapper[3556]: I1128 00:25:52.664555 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:25:55 crc kubenswrapper[3556]: I1128 00:25:55.693096 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Nov 28 00:26:15 crc kubenswrapper[3556]: I1128 00:26:15.770201 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6cd7"] Nov 28 00:26:15 crc kubenswrapper[3556]: I1128 00:26:15.774174 3556 topology_manager.go:215] "Topology Admit Handler" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" podNamespace="openshift-marketplace" podName="certified-operators-g6cd7" Nov 28 00:26:15 crc kubenswrapper[3556]: I1128 00:26:15.775146 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6cd7"] Nov 28 00:26:15 crc kubenswrapper[3556]: I1128 00:26:15.775212 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:15 crc kubenswrapper[3556]: I1128 00:26:15.916895 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8kl\" (UniqueName: \"kubernetes.io/projected/405a7c74-396f-4ba0-ae7f-cea2285c37a3-kube-api-access-bk8kl\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:15 crc kubenswrapper[3556]: I1128 00:26:15.917191 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-utilities\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:15 crc kubenswrapper[3556]: I1128 00:26:15.917255 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-catalog-content\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.018529 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kl\" (UniqueName: \"kubernetes.io/projected/405a7c74-396f-4ba0-ae7f-cea2285c37a3-kube-api-access-bk8kl\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.018689 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-utilities\") pod 
\"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.018747 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-catalog-content\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.019292 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-catalog-content\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.019752 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-utilities\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.044909 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk8kl\" (UniqueName: \"kubernetes.io/projected/405a7c74-396f-4ba0-ae7f-cea2285c37a3-kube-api-access-bk8kl\") pod \"certified-operators-g6cd7\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.106318 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:16 crc kubenswrapper[3556]: I1128 00:26:16.352372 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6cd7"] Nov 28 00:26:17 crc kubenswrapper[3556]: I1128 00:26:17.031420 3556 generic.go:334] "Generic (PLEG): container finished" podID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerID="e94df1ffadd728421d73e219f14cbf731c70d0df4e0d700bf905fd04c2646680" exitCode=0 Nov 28 00:26:17 crc kubenswrapper[3556]: I1128 00:26:17.031470 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6cd7" event={"ID":"405a7c74-396f-4ba0-ae7f-cea2285c37a3","Type":"ContainerDied","Data":"e94df1ffadd728421d73e219f14cbf731c70d0df4e0d700bf905fd04c2646680"} Nov 28 00:26:17 crc kubenswrapper[3556]: I1128 00:26:17.031722 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6cd7" event={"ID":"405a7c74-396f-4ba0-ae7f-cea2285c37a3","Type":"ContainerStarted","Data":"9f86ed50887629b1cac738770f2f295bf7de25d1eeac0a380d67ea7e636b5b8d"} Nov 28 00:26:18 crc kubenswrapper[3556]: I1128 00:26:18.700441 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:26:18 crc kubenswrapper[3556]: I1128 00:26:18.700801 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:26:18 crc kubenswrapper[3556]: I1128 00:26:18.700832 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:26:18 crc kubenswrapper[3556]: I1128 00:26:18.700871 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:26:18 crc kubenswrapper[3556]: I1128 00:26:18.700914 3556 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:26:19 crc kubenswrapper[3556]: I1128 00:26:19.043474 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6cd7" event={"ID":"405a7c74-396f-4ba0-ae7f-cea2285c37a3","Type":"ContainerStarted","Data":"19e9a4d403864cfeaac6c69ce2e267085210702aa2d80cf80e00298dd9028b2e"} Nov 28 00:26:22 crc kubenswrapper[3556]: I1128 00:26:22.664126 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:26:22 crc kubenswrapper[3556]: I1128 00:26:22.664462 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:26:22 crc kubenswrapper[3556]: I1128 00:26:22.664499 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:26:22 crc kubenswrapper[3556]: I1128 00:26:22.665239 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88c4fb4cb642fcbc411ede2f7fa1488222a3e7056a17bfed36ddfaeda62f2163"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 00:26:22 crc kubenswrapper[3556]: I1128 00:26:22.665415 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://88c4fb4cb642fcbc411ede2f7fa1488222a3e7056a17bfed36ddfaeda62f2163" gracePeriod=600 Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.062769 3556 generic.go:334] "Generic (PLEG): container finished" podID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerID="19e9a4d403864cfeaac6c69ce2e267085210702aa2d80cf80e00298dd9028b2e" exitCode=0 Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.062807 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6cd7" event={"ID":"405a7c74-396f-4ba0-ae7f-cea2285c37a3","Type":"ContainerDied","Data":"19e9a4d403864cfeaac6c69ce2e267085210702aa2d80cf80e00298dd9028b2e"} Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.367198 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zzb7d"] Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.367551 3556 topology_manager.go:215] "Topology Admit Handler" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" podNamespace="openshift-marketplace" podName="redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.368591 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.394249 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzb7d"] Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.504455 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-utilities\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.504535 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtn79\" (UniqueName: \"kubernetes.io/projected/235e73a2-df90-4023-bf19-8b8525f9f430-kube-api-access-qtn79\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.504595 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-catalog-content\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.606032 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-utilities\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.606130 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qtn79\" (UniqueName: \"kubernetes.io/projected/235e73a2-df90-4023-bf19-8b8525f9f430-kube-api-access-qtn79\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.606166 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-catalog-content\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.606536 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-utilities\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.606580 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-catalog-content\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.647494 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtn79\" (UniqueName: \"kubernetes.io/projected/235e73a2-df90-4023-bf19-8b8525f9f430-kube-api-access-qtn79\") pod \"redhat-operators-zzb7d\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") " pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:23 crc kubenswrapper[3556]: I1128 00:26:23.681373 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzb7d" Nov 28 00:26:24 crc kubenswrapper[3556]: I1128 00:26:24.076470 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="88c4fb4cb642fcbc411ede2f7fa1488222a3e7056a17bfed36ddfaeda62f2163" exitCode=0 Nov 28 00:26:24 crc kubenswrapper[3556]: I1128 00:26:24.076528 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"88c4fb4cb642fcbc411ede2f7fa1488222a3e7056a17bfed36ddfaeda62f2163"} Nov 28 00:26:24 crc kubenswrapper[3556]: I1128 00:26:24.076781 3556 scope.go:117] "RemoveContainer" containerID="756add6244838c2be85afcde4726595ecd7b69e02660adc403684ace5b7b9f01" Nov 28 00:26:24 crc kubenswrapper[3556]: I1128 00:26:24.229232 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzb7d"] Nov 28 00:26:24 crc kubenswrapper[3556]: W1128 00:26:24.234385 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod235e73a2_df90_4023_bf19_8b8525f9f430.slice/crio-aa95a53cf3cc880574fcc93ca6b7ffc6cef04c3ac0039bc80ae2f353f3b21e6e WatchSource:0}: Error finding container aa95a53cf3cc880574fcc93ca6b7ffc6cef04c3ac0039bc80ae2f353f3b21e6e: Status 404 returned error can't find the container with id aa95a53cf3cc880574fcc93ca6b7ffc6cef04c3ac0039bc80ae2f353f3b21e6e Nov 28 00:26:25 crc kubenswrapper[3556]: I1128 00:26:25.082255 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzb7d" event={"ID":"235e73a2-df90-4023-bf19-8b8525f9f430","Type":"ContainerStarted","Data":"aa95a53cf3cc880574fcc93ca6b7ffc6cef04c3ac0039bc80ae2f353f3b21e6e"} Nov 28 00:26:26 crc kubenswrapper[3556]: I1128 00:26:26.092571 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g6cd7" event={"ID":"405a7c74-396f-4ba0-ae7f-cea2285c37a3","Type":"ContainerStarted","Data":"98e5907bf4bc5c2f7d27ee0538787787c47f81d7675d132916bf83b71552befd"} Nov 28 00:26:26 crc kubenswrapper[3556]: I1128 00:26:26.106717 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:26 crc kubenswrapper[3556]: I1128 00:26:26.106758 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:26 crc kubenswrapper[3556]: I1128 00:26:26.120716 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6cd7" podStartSLOduration=4.77404525 podStartE2EDuration="11.12067693s" podCreationTimestamp="2025-11-28 00:26:15 +0000 UTC" firstStartedPulling="2025-11-28 00:26:17.033173343 +0000 UTC m=+838.625405333" lastFinishedPulling="2025-11-28 00:26:23.379805033 +0000 UTC m=+844.972037013" observedRunningTime="2025-11-28 00:26:26.12064778 +0000 UTC m=+847.712879780" watchObservedRunningTime="2025-11-28 00:26:26.12067693 +0000 UTC m=+847.712908930" Nov 28 00:26:27 crc kubenswrapper[3556]: I1128 00:26:27.099504 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"c3ebc645fbf92d88e5d7c56ce745d2dd963c7e740b9cfb31c7edff11fbc1c74b"} Nov 28 00:26:27 crc kubenswrapper[3556]: I1128 00:26:27.188063 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g6cd7" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="registry-server" probeResult="failure" output=< Nov 28 00:26:27 crc kubenswrapper[3556]: timeout: failed to connect service ":50051" within 1s Nov 28 00:26:27 crc kubenswrapper[3556]: > Nov 28 00:26:29 crc 
kubenswrapper[3556]: I1128 00:26:29.109404 3556 generic.go:334] "Generic (PLEG): container finished" podID="235e73a2-df90-4023-bf19-8b8525f9f430" containerID="a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555" exitCode=0 Nov 28 00:26:29 crc kubenswrapper[3556]: I1128 00:26:29.109462 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzb7d" event={"ID":"235e73a2-df90-4023-bf19-8b8525f9f430","Type":"ContainerDied","Data":"a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555"} Nov 28 00:26:31 crc kubenswrapper[3556]: I1128 00:26:31.121211 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzb7d" event={"ID":"235e73a2-df90-4023-bf19-8b8525f9f430","Type":"ContainerStarted","Data":"935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc"} Nov 28 00:26:36 crc kubenswrapper[3556]: I1128 00:26:36.230760 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:36 crc kubenswrapper[3556]: I1128 00:26:36.314034 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:36 crc kubenswrapper[3556]: I1128 00:26:36.350850 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6cd7"] Nov 28 00:26:38 crc kubenswrapper[3556]: I1128 00:26:38.160946 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6cd7" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="registry-server" containerID="cri-o://98e5907bf4bc5c2f7d27ee0538787787c47f81d7675d132916bf83b71552befd" gracePeriod=2 Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.174482 3556 generic.go:334] "Generic (PLEG): container finished" podID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" 
containerID="98e5907bf4bc5c2f7d27ee0538787787c47f81d7675d132916bf83b71552befd" exitCode=0 Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.174547 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6cd7" event={"ID":"405a7c74-396f-4ba0-ae7f-cea2285c37a3","Type":"ContainerDied","Data":"98e5907bf4bc5c2f7d27ee0538787787c47f81d7675d132916bf83b71552befd"} Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.397053 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6cd7" Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.522397 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk8kl\" (UniqueName: \"kubernetes.io/projected/405a7c74-396f-4ba0-ae7f-cea2285c37a3-kube-api-access-bk8kl\") pod \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.522486 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-utilities\") pod \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.522573 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-catalog-content\") pod \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\" (UID: \"405a7c74-396f-4ba0-ae7f-cea2285c37a3\") " Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.526686 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-utilities" (OuterVolumeSpecName: "utilities") pod "405a7c74-396f-4ba0-ae7f-cea2285c37a3" (UID: 
"405a7c74-396f-4ba0-ae7f-cea2285c37a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.534195 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405a7c74-396f-4ba0-ae7f-cea2285c37a3-kube-api-access-bk8kl" (OuterVolumeSpecName: "kube-api-access-bk8kl") pod "405a7c74-396f-4ba0-ae7f-cea2285c37a3" (UID: "405a7c74-396f-4ba0-ae7f-cea2285c37a3"). InnerVolumeSpecName "kube-api-access-bk8kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.624537 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bk8kl\" (UniqueName: \"kubernetes.io/projected/405a7c74-396f-4ba0-ae7f-cea2285c37a3-kube-api-access-bk8kl\") on node \"crc\" DevicePath \"\"" Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.624582 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.743143 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "405a7c74-396f-4ba0-ae7f-cea2285c37a3" (UID: "405a7c74-396f-4ba0-ae7f-cea2285c37a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:26:40 crc kubenswrapper[3556]: I1128 00:26:40.826777 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405a7c74-396f-4ba0-ae7f-cea2285c37a3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:26:41 crc kubenswrapper[3556]: I1128 00:26:41.181477 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6cd7" event={"ID":"405a7c74-396f-4ba0-ae7f-cea2285c37a3","Type":"ContainerDied","Data":"9f86ed50887629b1cac738770f2f295bf7de25d1eeac0a380d67ea7e636b5b8d"}
Nov 28 00:26:41 crc kubenswrapper[3556]: I1128 00:26:41.181522 3556 scope.go:117] "RemoveContainer" containerID="98e5907bf4bc5c2f7d27ee0538787787c47f81d7675d132916bf83b71552befd"
Nov 28 00:26:41 crc kubenswrapper[3556]: I1128 00:26:41.181635 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6cd7"
Nov 28 00:26:41 crc kubenswrapper[3556]: I1128 00:26:41.218597 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6cd7"]
Nov 28 00:26:41 crc kubenswrapper[3556]: I1128 00:26:41.224257 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6cd7"]
Nov 28 00:26:41 crc kubenswrapper[3556]: I1128 00:26:41.345905 3556 scope.go:117] "RemoveContainer" containerID="19e9a4d403864cfeaac6c69ce2e267085210702aa2d80cf80e00298dd9028b2e"
Nov 28 00:26:41 crc kubenswrapper[3556]: I1128 00:26:41.383935 3556 scope.go:117] "RemoveContainer" containerID="e94df1ffadd728421d73e219f14cbf731c70d0df4e0d700bf905fd04c2646680"
Nov 28 00:26:42 crc kubenswrapper[3556]: I1128 00:26:42.919741 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" path="/var/lib/kubelet/pods/405a7c74-396f-4ba0-ae7f-cea2285c37a3/volumes"
Nov 28 00:26:57 crc kubenswrapper[3556]: I1128 00:26:57.131788 3556 generic.go:334] "Generic (PLEG): container finished" podID="235e73a2-df90-4023-bf19-8b8525f9f430" containerID="935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc" exitCode=0
Nov 28 00:26:57 crc kubenswrapper[3556]: I1128 00:26:57.131831 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzb7d" event={"ID":"235e73a2-df90-4023-bf19-8b8525f9f430","Type":"ContainerDied","Data":"935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc"}
Nov 28 00:26:59 crc kubenswrapper[3556]: I1128 00:26:59.143885 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzb7d" event={"ID":"235e73a2-df90-4023-bf19-8b8525f9f430","Type":"ContainerStarted","Data":"21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620"}
Nov 28 00:26:59 crc kubenswrapper[3556]: I1128 00:26:59.167855 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zzb7d" podStartSLOduration=8.638902126 podStartE2EDuration="36.167806342s" podCreationTimestamp="2025-11-28 00:26:23 +0000 UTC" firstStartedPulling="2025-11-28 00:26:30.117905868 +0000 UTC m=+851.710137868" lastFinishedPulling="2025-11-28 00:26:57.646810094 +0000 UTC m=+879.239042084" observedRunningTime="2025-11-28 00:26:59.162149058 +0000 UTC m=+880.754381048" watchObservedRunningTime="2025-11-28 00:26:59.167806342 +0000 UTC m=+880.760038352"
Nov 28 00:27:03 crc kubenswrapper[3556]: I1128 00:27:03.681708 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zzb7d"
Nov 28 00:27:03 crc kubenswrapper[3556]: I1128 00:27:03.682308 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zzb7d"
Nov 28 00:27:04 crc kubenswrapper[3556]: I1128 00:27:04.880041 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zzb7d" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="registry-server" probeResult="failure" output=<
Nov 28 00:27:04 crc kubenswrapper[3556]: timeout: failed to connect service ":50051" within 1s
Nov 28 00:27:04 crc kubenswrapper[3556]: >
Nov 28 00:27:13 crc kubenswrapper[3556]: I1128 00:27:13.799380 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zzb7d"
Nov 28 00:27:13 crc kubenswrapper[3556]: I1128 00:27:13.916662 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zzb7d"
Nov 28 00:27:13 crc kubenswrapper[3556]: I1128 00:27:13.960963 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzb7d"]
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.231983 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zzb7d" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="registry-server" containerID="cri-o://21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620" gracePeriod=2
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.544738 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzb7d"
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.687317 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-catalog-content\") pod \"235e73a2-df90-4023-bf19-8b8525f9f430\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") "
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.687380 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-utilities\") pod \"235e73a2-df90-4023-bf19-8b8525f9f430\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") "
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.687436 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtn79\" (UniqueName: \"kubernetes.io/projected/235e73a2-df90-4023-bf19-8b8525f9f430-kube-api-access-qtn79\") pod \"235e73a2-df90-4023-bf19-8b8525f9f430\" (UID: \"235e73a2-df90-4023-bf19-8b8525f9f430\") "
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.688205 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-utilities" (OuterVolumeSpecName: "utilities") pod "235e73a2-df90-4023-bf19-8b8525f9f430" (UID: "235e73a2-df90-4023-bf19-8b8525f9f430"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.695323 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235e73a2-df90-4023-bf19-8b8525f9f430-kube-api-access-qtn79" (OuterVolumeSpecName: "kube-api-access-qtn79") pod "235e73a2-df90-4023-bf19-8b8525f9f430" (UID: "235e73a2-df90-4023-bf19-8b8525f9f430"). InnerVolumeSpecName "kube-api-access-qtn79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.789054 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qtn79\" (UniqueName: \"kubernetes.io/projected/235e73a2-df90-4023-bf19-8b8525f9f430-kube-api-access-qtn79\") on node \"crc\" DevicePath \"\""
Nov 28 00:27:15 crc kubenswrapper[3556]: I1128 00:27:15.789101 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.239093 3556 generic.go:334] "Generic (PLEG): container finished" podID="235e73a2-df90-4023-bf19-8b8525f9f430" containerID="21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620" exitCode=0
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.239135 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzb7d" event={"ID":"235e73a2-df90-4023-bf19-8b8525f9f430","Type":"ContainerDied","Data":"21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620"}
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.239158 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzb7d" event={"ID":"235e73a2-df90-4023-bf19-8b8525f9f430","Type":"ContainerDied","Data":"aa95a53cf3cc880574fcc93ca6b7ffc6cef04c3ac0039bc80ae2f353f3b21e6e"}
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.239178 3556 scope.go:117] "RemoveContainer" containerID="21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.239293 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzb7d"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.273869 3556 scope.go:117] "RemoveContainer" containerID="935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.332116 3556 scope.go:117] "RemoveContainer" containerID="a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.354858 3556 scope.go:117] "RemoveContainer" containerID="21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620"
Nov 28 00:27:16 crc kubenswrapper[3556]: E1128 00:27:16.355483 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620\": container with ID starting with 21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620 not found: ID does not exist" containerID="21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.355550 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620"} err="failed to get container status \"21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620\": rpc error: code = NotFound desc = could not find container \"21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620\": container with ID starting with 21773e721150579d9548f438eea1b4f54c910b46eb59749c5acbb6b06bb6b620 not found: ID does not exist"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.355567 3556 scope.go:117] "RemoveContainer" containerID="935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc"
Nov 28 00:27:16 crc kubenswrapper[3556]: E1128 00:27:16.356071 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc\": container with ID starting with 935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc not found: ID does not exist" containerID="935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.356105 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc"} err="failed to get container status \"935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc\": rpc error: code = NotFound desc = could not find container \"935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc\": container with ID starting with 935aafbf0991206afd33ae0fa163431499c08615fdd535c0088499aa615be9cc not found: ID does not exist"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.356118 3556 scope.go:117] "RemoveContainer" containerID="a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555"
Nov 28 00:27:16 crc kubenswrapper[3556]: E1128 00:27:16.356479 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555\": container with ID starting with a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555 not found: ID does not exist" containerID="a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.356513 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555"} err="failed to get container status \"a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555\": rpc error: code = NotFound desc = could not find container \"a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555\": container with ID starting with a180f5a07c49207fee36ce854959f053175fcc11ce8b82fa5fe41d4d2fb2c555 not found: ID does not exist"
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.512542 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "235e73a2-df90-4023-bf19-8b8525f9f430" (UID: "235e73a2-df90-4023-bf19-8b8525f9f430"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.580504 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzb7d"]
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.589656 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zzb7d"]
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.612431 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235e73a2-df90-4023-bf19-8b8525f9f430-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:27:16 crc kubenswrapper[3556]: I1128 00:27:16.919406 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" path="/var/lib/kubelet/pods/235e73a2-df90-4023-bf19-8b8525f9f430/volumes"
Nov 28 00:27:18 crc kubenswrapper[3556]: I1128 00:27:18.701861 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:27:18 crc kubenswrapper[3556]: I1128 00:27:18.703503 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:27:18 crc kubenswrapper[3556]: I1128 00:27:18.703636 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:27:18 crc kubenswrapper[3556]: I1128 00:27:18.703774 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:27:18 crc kubenswrapper[3556]: I1128 00:27:18.703928 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.349345 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rjw2b"]
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.349993 3556 topology_manager.go:215] "Topology Admit Handler" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" podNamespace="openshift-marketplace" podName="community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: E1128 00:27:30.350191 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="extract-utilities"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350207 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="extract-utilities"
Nov 28 00:27:30 crc kubenswrapper[3556]: E1128 00:27:30.350220 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="registry-server"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350227 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="registry-server"
Nov 28 00:27:30 crc kubenswrapper[3556]: E1128 00:27:30.350243 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="registry-server"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350251 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="registry-server"
Nov 28 00:27:30 crc kubenswrapper[3556]: E1128 00:27:30.350262 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="extract-utilities"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350269 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="extract-utilities"
Nov 28 00:27:30 crc kubenswrapper[3556]: E1128 00:27:30.350280 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="extract-content"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350287 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="extract-content"
Nov 28 00:27:30 crc kubenswrapper[3556]: E1128 00:27:30.350302 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="extract-content"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350309 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="extract-content"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350466 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="235e73a2-df90-4023-bf19-8b8525f9f430" containerName="registry-server"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.350482 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="405a7c74-396f-4ba0-ae7f-cea2285c37a3" containerName="registry-server"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.351384 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.377285 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rjw2b"]
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.444028 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdxtv\" (UniqueName: \"kubernetes.io/projected/6e090530-c284-429e-a54f-7c83b171b3ec-kube-api-access-pdxtv\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.444283 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-catalog-content\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.444309 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-utilities\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.545849 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-pdxtv\" (UniqueName: \"kubernetes.io/projected/6e090530-c284-429e-a54f-7c83b171b3ec-kube-api-access-pdxtv\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.545909 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-catalog-content\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.545946 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-utilities\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.546500 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-catalog-content\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.546511 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-utilities\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.568206 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdxtv\" (UniqueName: \"kubernetes.io/projected/6e090530-c284-429e-a54f-7c83b171b3ec-kube-api-access-pdxtv\") pod \"community-operators-rjw2b\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") " pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.667149 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:30 crc kubenswrapper[3556]: I1128 00:27:30.939614 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rjw2b"]
Nov 28 00:27:31 crc kubenswrapper[3556]: I1128 00:27:31.323450 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerStarted","Data":"7cec89f33b7cdc0beaf60b759eb7f496b58a296fddcfbdbe5249c5a6e721c9c4"}
Nov 28 00:27:32 crc kubenswrapper[3556]: I1128 00:27:32.329157 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerStarted","Data":"56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f"}
Nov 28 00:27:33 crc kubenswrapper[3556]: I1128 00:27:33.334646 3556 generic.go:334] "Generic (PLEG): container finished" podID="6e090530-c284-429e-a54f-7c83b171b3ec" containerID="56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f" exitCode=0
Nov 28 00:27:33 crc kubenswrapper[3556]: I1128 00:27:33.334685 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerDied","Data":"56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f"}
Nov 28 00:27:34 crc kubenswrapper[3556]: I1128 00:27:34.341915 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerStarted","Data":"9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a"}
Nov 28 00:27:37 crc kubenswrapper[3556]: I1128 00:27:37.356103 3556 generic.go:334] "Generic (PLEG): container finished" podID="6e090530-c284-429e-a54f-7c83b171b3ec" containerID="9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a" exitCode=0
Nov 28 00:27:37 crc kubenswrapper[3556]: I1128 00:27:37.356159 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerDied","Data":"9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a"}
Nov 28 00:27:38 crc kubenswrapper[3556]: I1128 00:27:38.362850 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerStarted","Data":"791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72"}
Nov 28 00:27:38 crc kubenswrapper[3556]: I1128 00:27:38.382352 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rjw2b" podStartSLOduration=4.081166126 podStartE2EDuration="8.382307217s" podCreationTimestamp="2025-11-28 00:27:30 +0000 UTC" firstStartedPulling="2025-11-28 00:27:33.336432957 +0000 UTC m=+914.928664947" lastFinishedPulling="2025-11-28 00:27:37.637574048 +0000 UTC m=+919.229806038" observedRunningTime="2025-11-28 00:27:38.382075565 +0000 UTC m=+919.974307555" watchObservedRunningTime="2025-11-28 00:27:38.382307217 +0000 UTC m=+919.974539217"
Nov 28 00:27:40 crc kubenswrapper[3556]: I1128 00:27:40.667929 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:40 crc kubenswrapper[3556]: I1128 00:27:40.669369 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:41 crc kubenswrapper[3556]: I1128 00:27:41.759508 3556 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rjw2b" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="registry-server" probeResult="failure" output=<
Nov 28 00:27:41 crc kubenswrapper[3556]: timeout: failed to connect service ":50051" within 1s
Nov 28 00:27:41 crc kubenswrapper[3556]: >
Nov 28 00:27:50 crc kubenswrapper[3556]: I1128 00:27:50.758079 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:50 crc kubenswrapper[3556]: I1128 00:27:50.880232 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:50 crc kubenswrapper[3556]: I1128 00:27:50.924070 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rjw2b"]
Nov 28 00:27:52 crc kubenswrapper[3556]: I1128 00:27:52.434566 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rjw2b" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="registry-server" containerID="cri-o://791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72" gracePeriod=2
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.307121 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.453594 3556 generic.go:334] "Generic (PLEG): container finished" podID="6e090530-c284-429e-a54f-7c83b171b3ec" containerID="791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72" exitCode=0
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.453718 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rjw2b"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.453692 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerDied","Data":"791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72"}
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.453854 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rjw2b" event={"ID":"6e090530-c284-429e-a54f-7c83b171b3ec","Type":"ContainerDied","Data":"7cec89f33b7cdc0beaf60b759eb7f496b58a296fddcfbdbe5249c5a6e721c9c4"}
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.453878 3556 scope.go:117] "RemoveContainer" containerID="791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.455593 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-catalog-content\") pod \"6e090530-c284-429e-a54f-7c83b171b3ec\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") "
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.455626 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdxtv\" (UniqueName: \"kubernetes.io/projected/6e090530-c284-429e-a54f-7c83b171b3ec-kube-api-access-pdxtv\") pod \"6e090530-c284-429e-a54f-7c83b171b3ec\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") "
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.455649 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-utilities\") pod \"6e090530-c284-429e-a54f-7c83b171b3ec\" (UID: \"6e090530-c284-429e-a54f-7c83b171b3ec\") "
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.456527 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-utilities" (OuterVolumeSpecName: "utilities") pod "6e090530-c284-429e-a54f-7c83b171b3ec" (UID: "6e090530-c284-429e-a54f-7c83b171b3ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.477091 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e090530-c284-429e-a54f-7c83b171b3ec-kube-api-access-pdxtv" (OuterVolumeSpecName: "kube-api-access-pdxtv") pod "6e090530-c284-429e-a54f-7c83b171b3ec" (UID: "6e090530-c284-429e-a54f-7c83b171b3ec"). InnerVolumeSpecName "kube-api-access-pdxtv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.498172 3556 scope.go:117] "RemoveContainer" containerID="9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.556926 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pdxtv\" (UniqueName: \"kubernetes.io/projected/6e090530-c284-429e-a54f-7c83b171b3ec-kube-api-access-pdxtv\") on node \"crc\" DevicePath \"\""
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.557280 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.557129 3556 scope.go:117] "RemoveContainer" containerID="56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.596001 3556 scope.go:117] "RemoveContainer" containerID="791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72"
Nov 28 00:27:53 crc kubenswrapper[3556]: E1128 00:27:53.598753 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72\": container with ID starting with 791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72 not found: ID does not exist" containerID="791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.598821 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72"} err="failed to get container status \"791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72\": rpc error: code = NotFound desc = could not find container \"791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72\": container with ID starting with 791dd9dd28e6a83dc89cbe3073e155666686b4c8d71cafb1bc1fa99a8950ea72 not found: ID does not exist"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.598835 3556 scope.go:117] "RemoveContainer" containerID="9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a"
Nov 28 00:27:53 crc kubenswrapper[3556]: E1128 00:27:53.599127 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a\": container with ID starting with 9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a not found: ID does not exist" containerID="9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.599158 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a"} err="failed to get container status \"9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a\": rpc error: code = NotFound desc = could not find container \"9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a\": container with ID starting with 9de9ac2ae3a52d2a32bf9ecdd290c6f00253300754ac5a5a49ac5dcfd6b4365a not found: ID does not exist"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.599169 3556 scope.go:117] "RemoveContainer" containerID="56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f"
Nov 28 00:27:53 crc kubenswrapper[3556]: E1128 00:27:53.600251 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f\": container with ID starting with 56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f not found: ID does not exist" containerID="56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f"
Nov 28 00:27:53 crc kubenswrapper[3556]: I1128 00:27:53.600283 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f"} err="failed to get container status \"56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f\": rpc error: code = NotFound desc = could not find container \"56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f\": container with ID starting with 56c8d8e4dd34931718d8166f9a68628e3980df14e712b8fdf70ce62a44f6805f not found: ID does not exist"
Nov 28 00:27:54 crc kubenswrapper[3556]: I1128 00:27:54.004091 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e090530-c284-429e-a54f-7c83b171b3ec" (UID: "6e090530-c284-429e-a54f-7c83b171b3ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:27:54 crc kubenswrapper[3556]: I1128 00:27:54.065192 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e090530-c284-429e-a54f-7c83b171b3ec-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:27:54 crc kubenswrapper[3556]: I1128 00:27:54.083217 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rjw2b"]
Nov 28 00:27:54 crc kubenswrapper[3556]: I1128 00:27:54.086924 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rjw2b"]
Nov 28 00:27:54 crc kubenswrapper[3556]: I1128 00:27:54.920818 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" path="/var/lib/kubelet/pods/6e090530-c284-429e-a54f-7c83b171b3ec/volumes"
Nov 28 00:28:18 crc kubenswrapper[3556]: I1128 00:28:18.585129 3556 generic.go:334] "Generic (PLEG): container finished" podID="62c48da3-a94b-494b-aee7-29345ef503fd" containerID="550207af96d02352a5d48fb3114aaf2f3506b574f2a60ffd4f162d3780ed4f16" exitCode=0
Nov 28 00:28:18 crc kubenswrapper[3556]: I1128 00:28:18.585157 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"62c48da3-a94b-494b-aee7-29345ef503fd","Type":"ContainerDied","Data":"550207af96d02352a5d48fb3114aaf2f3506b574f2a60ffd4f162d3780ed4f16"}
Nov 28 00:28:18 crc kubenswrapper[3556]: I1128 00:28:18.705142 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:28:18 crc kubenswrapper[3556]: I1128 00:28:18.705220 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:28:18 crc kubenswrapper[3556]: I1128 00:28:18.705247 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:28:18 crc kubenswrapper[3556]: I1128 00:28:18.705278 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:28:18 crc kubenswrapper[3556]: I1128 00:28:18.705308 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:28:19 crc kubenswrapper[3556]: I1128 00:28:19.849003 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018660 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-root\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") "
Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018733 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-pull\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") "
Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018772 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-buildworkdir\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") "
Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018839 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName:
\"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-system-configs\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018863 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-proxy-ca-bundles\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018893 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-build-blob-cache\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018922 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-buildcachedir\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018948 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-run\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.018982 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-node-pullsecrets\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 
00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.019002 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-ca-bundles\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.019064 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-push\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.019103 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdsdt\" (UniqueName: \"kubernetes.io/projected/62c48da3-a94b-494b-aee7-29345ef503fd-kube-api-access-cdsdt\") pod \"62c48da3-a94b-494b-aee7-29345ef503fd\" (UID: \"62c48da3-a94b-494b-aee7-29345ef503fd\") " Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.019633 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.019684 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.019964 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.020365 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.020471 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.021608 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.038929 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.038978 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.039054 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c48da3-a94b-494b-aee7-29345ef503fd-kube-api-access-cdsdt" (OuterVolumeSpecName: "kube-api-access-cdsdt") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "kube-api-access-cdsdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.071182 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119889 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119921 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119932 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119944 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cdsdt\" (UniqueName: \"kubernetes.io/projected/62c48da3-a94b-494b-aee7-29345ef503fd-kube-api-access-cdsdt\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119954 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/62c48da3-a94b-494b-aee7-29345ef503fd-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119963 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119973 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-system-configs\") on node \"crc\" 
DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119983 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62c48da3-a94b-494b-aee7-29345ef503fd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.119992 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/62c48da3-a94b-494b-aee7-29345ef503fd-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.120001 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.190442 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.220973 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.595545 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"62c48da3-a94b-494b-aee7-29345ef503fd","Type":"ContainerDied","Data":"22e30e913c85137f85dc5f38a2f13eb8709e132508efcfe38e35cb9888988d6d"} Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.595577 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e30e913c85137f85dc5f38a2f13eb8709e132508efcfe38e35cb9888988d6d" Nov 28 00:28:20 crc kubenswrapper[3556]: I1128 00:28:20.595625 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Nov 28 00:28:22 crc kubenswrapper[3556]: I1128 00:28:22.143500 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "62c48da3-a94b-494b-aee7-29345ef503fd" (UID: "62c48da3-a94b-494b-aee7-29345ef503fd"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:28:22 crc kubenswrapper[3556]: I1128 00:28:22.149878 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/62c48da3-a94b-494b-aee7-29345ef503fd-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.531553 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.531970 3556 topology_manager.go:215] "Topology Admit Handler" podUID="abe8683d-994f-42a3-9231-68a39956df37" podNamespace="service-telemetry" podName="smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: E1128 00:28:24.532149 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="extract-utilities" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532162 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="extract-utilities" Nov 28 00:28:24 crc kubenswrapper[3556]: E1128 00:28:24.532175 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="extract-content" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532182 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="extract-content" Nov 28 00:28:24 crc kubenswrapper[3556]: E1128 00:28:24.532197 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" containerName="manage-dockerfile" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532203 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" containerName="manage-dockerfile" Nov 28 00:28:24 crc kubenswrapper[3556]: E1128 
00:28:24.532215 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" containerName="git-clone" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532222 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" containerName="git-clone" Nov 28 00:28:24 crc kubenswrapper[3556]: E1128 00:28:24.532237 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="registry-server" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532246 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="registry-server" Nov 28 00:28:24 crc kubenswrapper[3556]: E1128 00:28:24.532257 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" containerName="docker-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532265 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" containerName="docker-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532411 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c48da3-a94b-494b-aee7-29345ef503fd" containerName="docker-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.532429 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e090530-c284-429e-a54f-7c83b171b3ec" containerName="registry-server" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.533083 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.540505 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.540701 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ps7tk" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.540514 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.541074 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.565654 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682231 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682307 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682338 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682364 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682394 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682435 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m725p\" (UniqueName: \"kubernetes.io/projected/abe8683d-994f-42a3-9231-68a39956df37-kube-api-access-m725p\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682473 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 
00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682498 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682531 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682560 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682592 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-push\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.682619 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783692 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783759 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783792 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783822 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783862 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783908 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-m725p\" (UniqueName: \"kubernetes.io/projected/abe8683d-994f-42a3-9231-68a39956df37-kube-api-access-m725p\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783946 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.783974 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784025 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784057 
3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784086 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-push\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784118 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784172 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784479 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build" Nov 28 00:28:24 crc 
kubenswrapper[3556]: I1128 00:28:24.784542 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784585 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784806 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784840 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.784992 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.785152 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.785278 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.797845 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.797877 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-push\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.808471 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-m725p\" (UniqueName: \"kubernetes.io/projected/abe8683d-994f-42a3-9231-68a39956df37-kube-api-access-m725p\") pod \"smart-gateway-operator-1-build\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") " pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:24 crc kubenswrapper[3556]: I1128 00:28:24.882568 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:25 crc kubenswrapper[3556]: I1128 00:28:25.083927 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Nov 28 00:28:25 crc kubenswrapper[3556]: I1128 00:28:25.619973 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"abe8683d-994f-42a3-9231-68a39956df37","Type":"ContainerStarted","Data":"c50d370aab311c7f4f30ef5ed0abad97c62440ef87ec06fe1d2b518564434dd5"}
Nov 28 00:28:26 crc kubenswrapper[3556]: I1128 00:28:26.631421 3556 generic.go:334] "Generic (PLEG): container finished" podID="abe8683d-994f-42a3-9231-68a39956df37" containerID="3185035b3cffcf57408db2ac536037b58961a9f10e0a5aa257d205cb928ab573" exitCode=0
Nov 28 00:28:26 crc kubenswrapper[3556]: I1128 00:28:26.631508 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"abe8683d-994f-42a3-9231-68a39956df37","Type":"ContainerDied","Data":"3185035b3cffcf57408db2ac536037b58961a9f10e0a5aa257d205cb928ab573"}
Nov 28 00:28:27 crc kubenswrapper[3556]: I1128 00:28:27.637394 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"abe8683d-994f-42a3-9231-68a39956df37","Type":"ContainerStarted","Data":"3f49a2fa495542bb0ad2b5e5e3b54ff96e986f63d417dc9fa90463bd3981b784"}
Nov 28 00:28:27 crc kubenswrapper[3556]: I1128 00:28:27.662610 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.662561238 podStartE2EDuration="3.662561238s" podCreationTimestamp="2025-11-28 00:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:28:27.660396426 +0000 UTC m=+969.252628426" watchObservedRunningTime="2025-11-28 00:28:27.662561238 +0000 UTC m=+969.254793238"
Nov 28 00:28:35 crc kubenswrapper[3556]: I1128 00:28:35.218437 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Nov 28 00:28:35 crc kubenswrapper[3556]: I1128 00:28:35.219255 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="abe8683d-994f-42a3-9231-68a39956df37" containerName="docker-build" containerID="cri-o://3f49a2fa495542bb0ad2b5e5e3b54ff96e986f63d417dc9fa90463bd3981b784" gracePeriod=30
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.690339 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_abe8683d-994f-42a3-9231-68a39956df37/docker-build/0.log"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.690888 3556 generic.go:334] "Generic (PLEG): container finished" podID="abe8683d-994f-42a3-9231-68a39956df37" containerID="3f49a2fa495542bb0ad2b5e5e3b54ff96e986f63d417dc9fa90463bd3981b784" exitCode=1
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.690938 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"abe8683d-994f-42a3-9231-68a39956df37","Type":"ContainerDied","Data":"3f49a2fa495542bb0ad2b5e5e3b54ff96e986f63d417dc9fa90463bd3981b784"}
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.901757 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.902213 3556 topology_manager.go:215] "Topology Admit Handler" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" podNamespace="service-telemetry" podName="smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.903366 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.906296 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.909639 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.909692 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.921159 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.940486 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.940558 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6q9\" (UniqueName: \"kubernetes.io/projected/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-kube-api-access-9d6q9\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.940582 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.940849 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.940931 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.940977 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.941052 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.941156 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.941218 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.941291 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.941323 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:36 crc kubenswrapper[3556]: I1128 00:28:36.941363 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-push\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043438 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043512 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043556 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-push\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043607 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043667 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6q9\" (UniqueName: \"kubernetes.io/projected/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-kube-api-access-9d6q9\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043707 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043760 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043791 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043820 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043852 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043883 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.043924 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.044059 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.045196 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.045581 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.045736 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.045790 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.046224 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.047633 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.047848 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.048185 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.050325 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.050774 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-push\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.063227 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6q9\" (UniqueName: \"kubernetes.io/projected/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-kube-api-access-9d6q9\") pod \"smart-gateway-operator-2-build\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.225306 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.355538 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_abe8683d-994f-42a3-9231-68a39956df37/docker-build/0.log"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.356194 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.448921 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449058 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-build-blob-cache\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449101 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-buildcachedir\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449172 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-run\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449211 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-node-pullsecrets\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449272 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-root\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449303 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m725p\" (UniqueName: \"kubernetes.io/projected/abe8683d-994f-42a3-9231-68a39956df37-kube-api-access-m725p\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449405 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-buildworkdir\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449550 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-ca-bundles\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449645 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449588 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-pull\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449844 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-proxy-ca-bundles\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.450069 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-push\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.450111 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-system-configs\") pod \"abe8683d-994f-42a3-9231-68a39956df37\" (UID: \"abe8683d-994f-42a3-9231-68a39956df37\") "
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.451414 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.449512 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.453376 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.453408 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.453480 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.454193 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.454257 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-buildworkdir\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.460247 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe8683d-994f-42a3-9231-68a39956df37-kube-api-access-m725p" (OuterVolumeSpecName: "kube-api-access-m725p") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "kube-api-access-m725p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.460465 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.464132 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.555228 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/abe8683d-994f-42a3-9231-68a39956df37-buildcachedir\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.555540 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m725p\" (UniqueName: \"kubernetes.io/projected/abe8683d-994f-42a3-9231-68a39956df37-kube-api-access-m725p\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.555609 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.555672 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.555733 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.555805 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/abe8683d-994f-42a3-9231-68a39956df37-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.555904 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/abe8683d-994f-42a3-9231-68a39956df37-build-system-configs\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.696997 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_abe8683d-994f-42a3-9231-68a39956df37/docker-build/0.log"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.697381 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"abe8683d-994f-42a3-9231-68a39956df37","Type":"ContainerDied","Data":"c50d370aab311c7f4f30ef5ed0abad97c62440ef87ec06fe1d2b518564434dd5"}
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.697410 3556 scope.go:117] "RemoveContainer" containerID="3f49a2fa495542bb0ad2b5e5e3b54ff96e986f63d417dc9fa90463bd3981b784"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.697501 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Nov 28 00:28:37 crc kubenswrapper[3556]: I1128 00:28:37.699390 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5","Type":"ContainerStarted","Data":"d17aba5177e5662018d1e15a114fcf04fe7dfe38c9847d927b318131bcfe4103"}
Nov 28 00:28:38 crc kubenswrapper[3556]: I1128 00:28:38.594443 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:28:38 crc kubenswrapper[3556]: I1128 00:28:38.669058 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-run\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:38 crc kubenswrapper[3556]: I1128 00:28:38.860759 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:28:38 crc kubenswrapper[3556]: I1128 00:28:38.871741 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-container-storage-root\") on node \"crc\" DevicePath \"\""
Nov 28 00:28:39 crc kubenswrapper[3556]: I1128 00:28:39.710372 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5","Type":"ContainerStarted","Data":"5b9dd8f77accfbdc72aba8a976998ca008d1aca96ed0b99246e19022be0cf78e"}
Nov 28 00:28:40 crc kubenswrapper[3556]: I1128 00:28:40.748489 3556 scope.go:117] "RemoveContainer" containerID="3185035b3cffcf57408db2ac536037b58961a9f10e0a5aa257d205cb928ab573"
Nov 28 00:28:40 crc kubenswrapper[3556]: I1128 00:28:40.884240 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "abe8683d-994f-42a3-9231-68a39956df37" (UID: "abe8683d-994f-42a3-9231-68a39956df37"). InnerVolumeSpecName "build-blob-cache".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:28:40 crc kubenswrapper[3556]: I1128 00:28:40.896172 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/abe8683d-994f-42a3-9231-68a39956df37-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:28:41 crc kubenswrapper[3556]: I1128 00:28:41.015737 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 28 00:28:41 crc kubenswrapper[3556]: I1128 00:28:41.021915 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Nov 28 00:28:42 crc kubenswrapper[3556]: I1128 00:28:42.728508 3556 generic.go:334] "Generic (PLEG): container finished" podID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerID="5b9dd8f77accfbdc72aba8a976998ca008d1aca96ed0b99246e19022be0cf78e" exitCode=0 Nov 28 00:28:42 crc kubenswrapper[3556]: I1128 00:28:42.728565 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5","Type":"ContainerDied","Data":"5b9dd8f77accfbdc72aba8a976998ca008d1aca96ed0b99246e19022be0cf78e"} Nov 28 00:28:42 crc kubenswrapper[3556]: I1128 00:28:42.922358 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe8683d-994f-42a3-9231-68a39956df37" path="/var/lib/kubelet/pods/abe8683d-994f-42a3-9231-68a39956df37/volumes" Nov 28 00:28:43 crc kubenswrapper[3556]: I1128 00:28:43.734366 3556 generic.go:334] "Generic (PLEG): container finished" podID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerID="24ea0099069822a98b5d07a045731c7c343ce6e2d31180e9ba0d9527444424d4" exitCode=0 Nov 28 00:28:43 crc kubenswrapper[3556]: I1128 00:28:43.734401 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" 
event={"ID":"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5","Type":"ContainerDied","Data":"24ea0099069822a98b5d07a045731c7c343ce6e2d31180e9ba0d9527444424d4"} Nov 28 00:28:43 crc kubenswrapper[3556]: I1128 00:28:43.768779 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_5c65ce5f-58fd-4b83-97dc-e54eefad1ce5/manage-dockerfile/0.log" Nov 28 00:28:44 crc kubenswrapper[3556]: I1128 00:28:44.741043 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5","Type":"ContainerStarted","Data":"28546ff4241c237ea08ee724f2e60bb15bd0134a092a93086ee0e832068bad1b"} Nov 28 00:28:44 crc kubenswrapper[3556]: I1128 00:28:44.767301 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=8.767264339 podStartE2EDuration="8.767264339s" podCreationTimestamp="2025-11-28 00:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:28:44.76391593 +0000 UTC m=+986.356147940" watchObservedRunningTime="2025-11-28 00:28:44.767264339 +0000 UTC m=+986.359496339" Nov 28 00:28:52 crc kubenswrapper[3556]: I1128 00:28:52.664062 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:28:52 crc kubenswrapper[3556]: I1128 00:28:52.664572 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 28 00:29:18 crc kubenswrapper[3556]: I1128 00:29:18.705973 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:29:18 crc kubenswrapper[3556]: I1128 00:29:18.706535 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:29:18 crc kubenswrapper[3556]: I1128 00:29:18.706566 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:29:18 crc kubenswrapper[3556]: I1128 00:29:18.706599 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:29:18 crc kubenswrapper[3556]: I1128 00:29:18.706630 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:29:22 crc kubenswrapper[3556]: I1128 00:29:22.663614 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:29:22 crc kubenswrapper[3556]: I1128 00:29:22.664174 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:29:52 crc kubenswrapper[3556]: I1128 00:29:52.664400 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:29:52 crc kubenswrapper[3556]: I1128 00:29:52.664886 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:29:52 crc kubenswrapper[3556]: I1128 00:29:52.664925 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:29:52 crc kubenswrapper[3556]: I1128 00:29:52.665660 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3ebc645fbf92d88e5d7c56ce745d2dd963c7e740b9cfb31c7edff11fbc1c74b"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 00:29:52 crc kubenswrapper[3556]: I1128 00:29:52.665841 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://c3ebc645fbf92d88e5d7c56ce745d2dd963c7e740b9cfb31c7edff11fbc1c74b" gracePeriod=600 Nov 28 00:29:53 crc kubenswrapper[3556]: I1128 00:29:53.534637 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="c3ebc645fbf92d88e5d7c56ce745d2dd963c7e740b9cfb31c7edff11fbc1c74b" exitCode=0 Nov 28 00:29:53 crc kubenswrapper[3556]: I1128 00:29:53.534701 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"c3ebc645fbf92d88e5d7c56ce745d2dd963c7e740b9cfb31c7edff11fbc1c74b"} Nov 28 00:29:53 crc kubenswrapper[3556]: I1128 00:29:53.535596 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"9cf992a274a0e70310dc3d7d1301a0c527636124f65ae98d66c11396ccb07234"} Nov 28 00:29:53 crc kubenswrapper[3556]: I1128 00:29:53.535724 3556 scope.go:117] "RemoveContainer" containerID="88c4fb4cb642fcbc411ede2f7fa1488222a3e7056a17bfed36ddfaeda62f2163" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.178767 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d"] Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.179550 3556 topology_manager.go:215] "Topology Admit Handler" podUID="1b7f0980-83ea-4569-908f-86e823454cdc" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: E1128 00:30:00.179756 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="abe8683d-994f-42a3-9231-68a39956df37" containerName="manage-dockerfile" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.179772 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe8683d-994f-42a3-9231-68a39956df37" containerName="manage-dockerfile" Nov 28 00:30:00 crc kubenswrapper[3556]: E1128 00:30:00.179798 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="abe8683d-994f-42a3-9231-68a39956df37" containerName="docker-build" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.179810 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe8683d-994f-42a3-9231-68a39956df37" containerName="docker-build" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.179935 3556 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="abe8683d-994f-42a3-9231-68a39956df37" containerName="docker-build" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.180436 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.183443 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.183503 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.186809 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d"] Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.276708 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnsrx\" (UniqueName: \"kubernetes.io/projected/1b7f0980-83ea-4569-908f-86e823454cdc-kube-api-access-rnsrx\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.277152 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b7f0980-83ea-4569-908f-86e823454cdc-secret-volume\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.277261 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/1b7f0980-83ea-4569-908f-86e823454cdc-config-volume\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.378564 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b7f0980-83ea-4569-908f-86e823454cdc-secret-volume\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.378656 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b7f0980-83ea-4569-908f-86e823454cdc-config-volume\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.378699 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-rnsrx\" (UniqueName: \"kubernetes.io/projected/1b7f0980-83ea-4569-908f-86e823454cdc-kube-api-access-rnsrx\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.379784 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b7f0980-83ea-4569-908f-86e823454cdc-config-volume\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 
00:30:00.386227 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b7f0980-83ea-4569-908f-86e823454cdc-secret-volume\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.394957 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnsrx\" (UniqueName: \"kubernetes.io/projected/1b7f0980-83ea-4569-908f-86e823454cdc-kube-api-access-rnsrx\") pod \"collect-profiles-29404830-mgv7d\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.502864 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:00 crc kubenswrapper[3556]: I1128 00:30:00.703936 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d"] Nov 28 00:30:01 crc kubenswrapper[3556]: I1128 00:30:01.592696 3556 generic.go:334] "Generic (PLEG): container finished" podID="1b7f0980-83ea-4569-908f-86e823454cdc" containerID="161c5857b0f7a43f1f4f3bb3f323a1b9f58c87e72b58123cbe0f6ab18e0e7af6" exitCode=0 Nov 28 00:30:01 crc kubenswrapper[3556]: I1128 00:30:01.592756 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" event={"ID":"1b7f0980-83ea-4569-908f-86e823454cdc","Type":"ContainerDied","Data":"161c5857b0f7a43f1f4f3bb3f323a1b9f58c87e72b58123cbe0f6ab18e0e7af6"} Nov 28 00:30:01 crc kubenswrapper[3556]: I1128 00:30:01.593114 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" 
event={"ID":"1b7f0980-83ea-4569-908f-86e823454cdc","Type":"ContainerStarted","Data":"158d35e3f907f66060385809c23385765793fb4c75c08d69fb399e021fe6f29f"} Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.799498 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.809926 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b7f0980-83ea-4569-908f-86e823454cdc-config-volume\") pod \"1b7f0980-83ea-4569-908f-86e823454cdc\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.810037 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnsrx\" (UniqueName: \"kubernetes.io/projected/1b7f0980-83ea-4569-908f-86e823454cdc-kube-api-access-rnsrx\") pod \"1b7f0980-83ea-4569-908f-86e823454cdc\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.810082 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b7f0980-83ea-4569-908f-86e823454cdc-secret-volume\") pod \"1b7f0980-83ea-4569-908f-86e823454cdc\" (UID: \"1b7f0980-83ea-4569-908f-86e823454cdc\") " Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.810932 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7f0980-83ea-4569-908f-86e823454cdc-config-volume" (OuterVolumeSpecName: "config-volume") pod "1b7f0980-83ea-4569-908f-86e823454cdc" (UID: "1b7f0980-83ea-4569-908f-86e823454cdc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.816881 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7f0980-83ea-4569-908f-86e823454cdc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1b7f0980-83ea-4569-908f-86e823454cdc" (UID: "1b7f0980-83ea-4569-908f-86e823454cdc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.819310 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7f0980-83ea-4569-908f-86e823454cdc-kube-api-access-rnsrx" (OuterVolumeSpecName: "kube-api-access-rnsrx") pod "1b7f0980-83ea-4569-908f-86e823454cdc" (UID: "1b7f0980-83ea-4569-908f-86e823454cdc"). InnerVolumeSpecName "kube-api-access-rnsrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.911115 3556 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b7f0980-83ea-4569-908f-86e823454cdc-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.911164 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rnsrx\" (UniqueName: \"kubernetes.io/projected/1b7f0980-83ea-4569-908f-86e823454cdc-kube-api-access-rnsrx\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:02 crc kubenswrapper[3556]: I1128 00:30:02.911178 3556 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b7f0980-83ea-4569-908f-86e823454cdc-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:03 crc kubenswrapper[3556]: I1128 00:30:03.638463 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" 
event={"ID":"1b7f0980-83ea-4569-908f-86e823454cdc","Type":"ContainerDied","Data":"158d35e3f907f66060385809c23385765793fb4c75c08d69fb399e021fe6f29f"} Nov 28 00:30:03 crc kubenswrapper[3556]: I1128 00:30:03.638504 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="158d35e3f907f66060385809c23385765793fb4c75c08d69fb399e021fe6f29f" Nov 28 00:30:03 crc kubenswrapper[3556]: I1128 00:30:03.638507 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404830-mgv7d" Nov 28 00:30:03 crc kubenswrapper[3556]: I1128 00:30:03.885613 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Nov 28 00:30:03 crc kubenswrapper[3556]: I1128 00:30:03.893830 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251935-d7x6j"] Nov 28 00:30:04 crc kubenswrapper[3556]: I1128 00:30:04.918598 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51936587-a4af-470d-ad92-8ab9062cbc72" path="/var/lib/kubelet/pods/51936587-a4af-470d-ad92-8ab9062cbc72/volumes" Nov 28 00:30:12 crc kubenswrapper[3556]: I1128 00:30:12.689953 3556 generic.go:334] "Generic (PLEG): container finished" podID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerID="28546ff4241c237ea08ee724f2e60bb15bd0134a092a93086ee0e832068bad1b" exitCode=0 Nov 28 00:30:12 crc kubenswrapper[3556]: I1128 00:30:12.690249 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5","Type":"ContainerDied","Data":"28546ff4241c237ea08ee724f2e60bb15bd0134a092a93086ee0e832068bad1b"} Nov 28 00:30:13 crc kubenswrapper[3556]: I1128 00:30:13.910111 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.072714 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildcachedir\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.072772 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-ca-bundles\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.072820 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-blob-cache\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.072829 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.072913 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-push\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.072940 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-node-pullsecrets\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.072973 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d6q9\" (UniqueName: \"kubernetes.io/projected/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-kube-api-access-9d6q9\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073043 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-proxy-ca-bundles\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073064 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-root\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073082 3556 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildworkdir\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073108 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-run\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073134 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-system-configs\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073160 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-pull\") pod \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\" (UID: \"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5\") " Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073385 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073790 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.073846 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.074269 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.074802 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.075790 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.077927 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.078802 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-kube-api-access-9d6q9" (OuterVolumeSpecName: "kube-api-access-9d6q9") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "kube-api-access-9d6q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.078986 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.079364 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.174864 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.174903 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.174963 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9d6q9\" (UniqueName: \"kubernetes.io/projected/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-kube-api-access-9d6q9\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.174978 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.174994 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.175022 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.175037 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-system-configs\") on node \"crc\" 
DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.175050 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.175062 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.268316 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.275636 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.704993 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"5c65ce5f-58fd-4b83-97dc-e54eefad1ce5","Type":"ContainerDied","Data":"d17aba5177e5662018d1e15a114fcf04fe7dfe38c9847d927b318131bcfe4103"} Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.705072 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d17aba5177e5662018d1e15a114fcf04fe7dfe38c9847d927b318131bcfe4103" Nov 28 00:30:14 crc kubenswrapper[3556]: I1128 00:30:14.705330 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Nov 28 00:30:16 crc kubenswrapper[3556]: I1128 00:30:16.454491 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" (UID: "5c65ce5f-58fd-4b83-97dc-e54eefad1ce5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:16 crc kubenswrapper[3556]: I1128 00:30:16.503763 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5c65ce5f-58fd-4b83-97dc-e54eefad1ce5-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:18 crc kubenswrapper[3556]: I1128 00:30:18.707359 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:30:18 crc kubenswrapper[3556]: I1128 00:30:18.707751 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:30:18 crc kubenswrapper[3556]: I1128 00:30:18.707791 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:30:18 crc kubenswrapper[3556]: I1128 00:30:18.707822 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:30:18 crc kubenswrapper[3556]: I1128 00:30:18.707867 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217145 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217261 3556 
topology_manager.go:215] "Topology Admit Handler" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" podNamespace="service-telemetry" podName="sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: E1128 00:30:19.217428 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="1b7f0980-83ea-4569-908f-86e823454cdc" containerName="collect-profiles" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217442 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7f0980-83ea-4569-908f-86e823454cdc" containerName="collect-profiles" Nov 28 00:30:19 crc kubenswrapper[3556]: E1128 00:30:19.217452 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerName="git-clone" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217460 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerName="git-clone" Nov 28 00:30:19 crc kubenswrapper[3556]: E1128 00:30:19.217485 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerName="docker-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217494 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerName="docker-build" Nov 28 00:30:19 crc kubenswrapper[3556]: E1128 00:30:19.217507 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerName="manage-dockerfile" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217515 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerName="manage-dockerfile" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217911 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c65ce5f-58fd-4b83-97dc-e54eefad1ce5" containerName="docker-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.217936 3556 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7f0980-83ea-4569-908f-86e823454cdc" containerName="collect-profiles" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.218726 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.222521 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ps7tk" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.222549 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.222577 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.222756 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.230700 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.341820 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: E1128 00:30:19.342185 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373\": container with ID starting with 13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373 not found: ID does not exist" 
containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342219 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342247 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373" err="rpc error: code = NotFound desc = could not find container \"13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373\": container with ID starting with 13053062c85d9edb3365e456db12e124816e6411643a8553c324352ece2c7373 not found: ID does not exist" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342257 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342294 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342325 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: 
\"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-push\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342354 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342388 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-pull\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342423 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sxr5\" (UniqueName: \"kubernetes.io/projected/98984eb9-805f-41c9-ab8a-056decf99e7d-kube-api-access-4sxr5\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342461 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342490 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342529 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.342564 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.443771 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-buildworkdir\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.443834 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.443864 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.443895 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.443924 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.443952 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-push\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.443976 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444004 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: 
\"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-pull\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444058 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4sxr5\" (UniqueName: \"kubernetes.io/projected/98984eb9-805f-41c9-ab8a-056decf99e7d-kube-api-access-4sxr5\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444100 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444130 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444535 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444565 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-buildworkdir\") pod \"sg-core-1-build\" (UID: 
\"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444694 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.444704 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-buildcachedir\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.445123 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-system-configs\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.445235 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.445326 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-root\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 
00:30:19.445825 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-run\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.446204 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.446290 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.451861 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-pull\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.457163 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-push\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.476812 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4sxr5\" (UniqueName: \"kubernetes.io/projected/98984eb9-805f-41c9-ab8a-056decf99e7d-kube-api-access-4sxr5\") pod \"sg-core-1-build\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.534707 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 28 00:30:19 crc kubenswrapper[3556]: I1128 00:30:19.957423 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 28 00:30:20 crc kubenswrapper[3556]: I1128 00:30:20.743098 3556 generic.go:334] "Generic (PLEG): container finished" podID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerID="a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c" exitCode=0 Nov 28 00:30:20 crc kubenswrapper[3556]: I1128 00:30:20.743135 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"98984eb9-805f-41c9-ab8a-056decf99e7d","Type":"ContainerDied","Data":"a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c"} Nov 28 00:30:20 crc kubenswrapper[3556]: I1128 00:30:20.743156 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"98984eb9-805f-41c9-ab8a-056decf99e7d","Type":"ContainerStarted","Data":"650076a18a7bd148cf18e3cb6afc00696560ec36f027ac48037a81e0f9c0cdc1"} Nov 28 00:30:21 crc kubenswrapper[3556]: I1128 00:30:21.752920 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"98984eb9-805f-41c9-ab8a-056decf99e7d","Type":"ContainerStarted","Data":"f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56"} Nov 28 00:30:21 crc kubenswrapper[3556]: I1128 00:30:21.793938 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=2.79386967 
podStartE2EDuration="2.79386967s" podCreationTimestamp="2025-11-28 00:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:30:21.777606986 +0000 UTC m=+1083.369838986" watchObservedRunningTime="2025-11-28 00:30:21.79386967 +0000 UTC m=+1083.386101730" Nov 28 00:30:29 crc kubenswrapper[3556]: I1128 00:30:29.279957 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 28 00:30:29 crc kubenswrapper[3556]: I1128 00:30:29.280793 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerName="docker-build" containerID="cri-o://f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56" gracePeriod=30 Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.706736 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_98984eb9-805f-41c9-ab8a-056decf99e7d/docker-build/0.log" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.707340 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.799209 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_98984eb9-805f-41c9-ab8a-056decf99e7d/docker-build/0.log" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.799813 3556 generic.go:334] "Generic (PLEG): container finished" podID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerID="f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56" exitCode=1 Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.799874 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"98984eb9-805f-41c9-ab8a-056decf99e7d","Type":"ContainerDied","Data":"f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56"} Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.799888 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.799914 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"98984eb9-805f-41c9-ab8a-056decf99e7d","Type":"ContainerDied","Data":"650076a18a7bd148cf18e3cb6afc00696560ec36f027ac48037a81e0f9c0cdc1"} Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.799944 3556 scope.go:117] "RemoveContainer" containerID="f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807527 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-root\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807627 3556 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-push\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807671 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-proxy-ca-bundles\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807727 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-system-configs\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807801 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sxr5\" (UniqueName: \"kubernetes.io/projected/98984eb9-805f-41c9-ab8a-056decf99e7d-kube-api-access-4sxr5\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807843 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-run\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807883 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-buildcachedir\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807930 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-node-pullsecrets\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.807988 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-ca-bundles\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.808090 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-build-blob-cache\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.808148 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-buildworkdir\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.808199 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-pull\") pod \"98984eb9-805f-41c9-ab8a-056decf99e7d\" (UID: \"98984eb9-805f-41c9-ab8a-056decf99e7d\") " Nov 28 00:30:30 crc 
kubenswrapper[3556]: I1128 00:30:30.809597 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.809881 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.809934 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.810520 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.811123 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.811358 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.812834 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.820093 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.825824 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98984eb9-805f-41c9-ab8a-056decf99e7d-kube-api-access-4sxr5" (OuterVolumeSpecName: "kube-api-access-4sxr5") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "kube-api-access-4sxr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.826701 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.906783 3556 scope.go:117] "RemoveContainer" containerID="a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.909959 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.909990 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910007 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/98984eb9-805f-41c9-ab8a-056decf99e7d-builder-dockercfg-ps7tk-push\") on node 
\"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910040 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910055 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910070 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4sxr5\" (UniqueName: \"kubernetes.io/projected/98984eb9-805f-41c9-ab8a-056decf99e7d-kube-api-access-4sxr5\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910083 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910100 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910113 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/98984eb9-805f-41c9-ab8a-056decf99e7d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.910128 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98984eb9-805f-41c9-ab8a-056decf99e7d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:30 crc 
kubenswrapper[3556]: I1128 00:30:30.942548 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.948697 3556 scope.go:117] "RemoveContainer" containerID="f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56" Nov 28 00:30:30 crc kubenswrapper[3556]: E1128 00:30:30.949491 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56\": container with ID starting with f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56 not found: ID does not exist" containerID="f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.949549 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56"} err="failed to get container status \"f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56\": rpc error: code = NotFound desc = could not find container \"f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56\": container with ID starting with f0a5d6e349c392a22ead13ee371cc64dcb7a820071e5fd25e7f1163790488e56 not found: ID does not exist" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.949563 3556 scope.go:117] "RemoveContainer" containerID="a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c" Nov 28 00:30:30 crc kubenswrapper[3556]: E1128 00:30:30.950112 3556 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c\": container with ID starting with a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c not found: ID does not exist" containerID="a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.950175 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c"} err="failed to get container status \"a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c\": rpc error: code = NotFound desc = could not find container \"a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c\": container with ID starting with a8cb2bd2f422cd6518cc5a9914e8763f23c64a01856ac4a34260bf484c10bd7c not found: ID does not exist" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.969911 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "98984eb9-805f-41c9-ab8a-056decf99e7d" (UID: "98984eb9-805f-41c9-ab8a-056decf99e7d"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.992076 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.992208 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" podNamespace="service-telemetry" podName="sg-core-2-build" Nov 28 00:30:30 crc kubenswrapper[3556]: E1128 00:30:30.992377 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerName="docker-build" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.992394 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerName="docker-build" Nov 28 00:30:30 crc kubenswrapper[3556]: E1128 00:30:30.992415 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerName="manage-dockerfile" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.992423 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerName="manage-dockerfile" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.992583 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" containerName="docker-build" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.993496 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.998716 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.998717 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Nov 28 00:30:30 crc kubenswrapper[3556]: I1128 00:30:30.998761 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.006484 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.033818 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-root\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.033884 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.033926 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-push\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: 
I1128 00:30:31.033961 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-pull\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.033994 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034035 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034070 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-system-configs\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034095 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildcachedir\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 
00:30:31.034120 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwzg\" (UniqueName: \"kubernetes.io/projected/9c8b376f-89cb-42dc-8799-4af1da92cc07-kube-api-access-lxwzg\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034147 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-run\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034179 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildworkdir\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034218 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034281 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.034297 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/98984eb9-805f-41c9-ab8a-056decf99e7d-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135045 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135092 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-push\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135115 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-pull\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135139 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135170 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " 
pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135206 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-system-configs\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135232 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildcachedir\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135259 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lxwzg\" (UniqueName: \"kubernetes.io/projected/9c8b376f-89cb-42dc-8799-4af1da92cc07-kube-api-access-lxwzg\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135289 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-run\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135319 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildworkdir\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135351 3556 
reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135400 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-root\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135748 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135758 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-root\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135849 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.135893 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildcachedir\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.136131 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.136132 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-run\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.136147 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-system-configs\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.136329 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildworkdir\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.137615 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: 
\"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.140023 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-pull\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.140201 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-push\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.145550 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.151673 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.159190 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxwzg\" (UniqueName: \"kubernetes.io/projected/9c8b376f-89cb-42dc-8799-4af1da92cc07-kube-api-access-lxwzg\") pod \"sg-core-2-build\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.357163 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.602100 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Nov 28 00:30:31 crc kubenswrapper[3556]: W1128 00:30:31.611542 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8b376f_89cb_42dc_8799_4af1da92cc07.slice/crio-9d4c04ababdab7d5c3bd9e6bdb30a1186e33953d71c5e7d2a9136481621cb122 WatchSource:0}: Error finding container 9d4c04ababdab7d5c3bd9e6bdb30a1186e33953d71c5e7d2a9136481621cb122: Status 404 returned error can't find the container with id 9d4c04ababdab7d5c3bd9e6bdb30a1186e33953d71c5e7d2a9136481621cb122 Nov 28 00:30:31 crc kubenswrapper[3556]: I1128 00:30:31.806603 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"9c8b376f-89cb-42dc-8799-4af1da92cc07","Type":"ContainerStarted","Data":"9d4c04ababdab7d5c3bd9e6bdb30a1186e33953d71c5e7d2a9136481621cb122"} Nov 28 00:30:32 crc kubenswrapper[3556]: I1128 00:30:32.815724 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"9c8b376f-89cb-42dc-8799-4af1da92cc07","Type":"ContainerStarted","Data":"e4a50690959750a14b5823aa2f01230aa965d1f1ced7269a1ee8e59705d0f46f"} Nov 28 00:30:32 crc kubenswrapper[3556]: I1128 00:30:32.929413 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98984eb9-805f-41c9-ab8a-056decf99e7d" path="/var/lib/kubelet/pods/98984eb9-805f-41c9-ab8a-056decf99e7d/volumes" Nov 28 00:30:33 crc kubenswrapper[3556]: I1128 00:30:33.824767 3556 generic.go:334] "Generic (PLEG): container finished" podID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerID="e4a50690959750a14b5823aa2f01230aa965d1f1ced7269a1ee8e59705d0f46f" exitCode=0 Nov 28 00:30:33 crc kubenswrapper[3556]: I1128 00:30:33.824824 3556 kubelet.go:2461] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"9c8b376f-89cb-42dc-8799-4af1da92cc07","Type":"ContainerDied","Data":"e4a50690959750a14b5823aa2f01230aa965d1f1ced7269a1ee8e59705d0f46f"} Nov 28 00:30:34 crc kubenswrapper[3556]: I1128 00:30:34.832291 3556 generic.go:334] "Generic (PLEG): container finished" podID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerID="f742244154e7b9eff5b47af2cf0313944347979236a80f1305d681b896823b1c" exitCode=0 Nov 28 00:30:34 crc kubenswrapper[3556]: I1128 00:30:34.832379 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"9c8b376f-89cb-42dc-8799-4af1da92cc07","Type":"ContainerDied","Data":"f742244154e7b9eff5b47af2cf0313944347979236a80f1305d681b896823b1c"} Nov 28 00:30:34 crc kubenswrapper[3556]: I1128 00:30:34.877732 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_9c8b376f-89cb-42dc-8799-4af1da92cc07/manage-dockerfile/0.log" Nov 28 00:30:35 crc kubenswrapper[3556]: I1128 00:30:35.842830 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"9c8b376f-89cb-42dc-8799-4af1da92cc07","Type":"ContainerStarted","Data":"1a9a75ccfa2f9b39a45b6ce0908ed3c57af2c433e82e650fc9ce55836d686382"} Nov 28 00:31:18 crc kubenswrapper[3556]: I1128 00:31:18.709110 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:31:18 crc kubenswrapper[3556]: I1128 00:31:18.710144 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:31:18 crc kubenswrapper[3556]: I1128 00:31:18.710188 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:31:18 crc kubenswrapper[3556]: I1128 00:31:18.710277 3556 kubelet_getters.go:187] "Pod status updated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:31:18 crc kubenswrapper[3556]: I1128 00:31:18.710360 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:31:52 crc kubenswrapper[3556]: I1128 00:31:52.664596 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:31:52 crc kubenswrapper[3556]: I1128 00:31:52.665214 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:32:18 crc kubenswrapper[3556]: I1128 00:32:18.711175 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:32:18 crc kubenswrapper[3556]: I1128 00:32:18.711749 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:32:18 crc kubenswrapper[3556]: I1128 00:32:18.711801 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:32:18 crc kubenswrapper[3556]: I1128 00:32:18.711862 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:32:18 crc kubenswrapper[3556]: I1128 00:32:18.711901 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:32:22 crc kubenswrapper[3556]: I1128 00:32:22.663938 3556 
patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:32:22 crc kubenswrapper[3556]: I1128 00:32:22.664289 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:32:52 crc kubenswrapper[3556]: I1128 00:32:52.664082 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:32:52 crc kubenswrapper[3556]: I1128 00:32:52.664616 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:32:52 crc kubenswrapper[3556]: I1128 00:32:52.664655 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:32:52 crc kubenswrapper[3556]: I1128 00:32:52.665373 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cf992a274a0e70310dc3d7d1301a0c527636124f65ae98d66c11396ccb07234"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 00:32:52 crc kubenswrapper[3556]: I1128 00:32:52.665539 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://9cf992a274a0e70310dc3d7d1301a0c527636124f65ae98d66c11396ccb07234" gracePeriod=600 Nov 28 00:32:53 crc kubenswrapper[3556]: I1128 00:32:53.824044 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="9cf992a274a0e70310dc3d7d1301a0c527636124f65ae98d66c11396ccb07234" exitCode=0 Nov 28 00:32:53 crc kubenswrapper[3556]: I1128 00:32:53.824087 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"9cf992a274a0e70310dc3d7d1301a0c527636124f65ae98d66c11396ccb07234"} Nov 28 00:32:53 crc kubenswrapper[3556]: I1128 00:32:53.824113 3556 scope.go:117] "RemoveContainer" containerID="c3ebc645fbf92d88e5d7c56ce745d2dd963c7e740b9cfb31c7edff11fbc1c74b" Nov 28 00:32:54 crc kubenswrapper[3556]: I1128 00:32:54.831931 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"77ce0f6a03e4a0ff03abcc42291734e51c9965a62271e2d0ca1f6177a9180a17"} Nov 28 00:32:54 crc kubenswrapper[3556]: I1128 00:32:54.857419 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=144.857378395 podStartE2EDuration="2m24.857378395s" podCreationTimestamp="2025-11-28 00:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 00:30:35.893920295 +0000 UTC m=+1097.486152315" watchObservedRunningTime="2025-11-28 00:32:54.857378395 +0000 UTC m=+1236.449610395" Nov 28 00:33:18 crc kubenswrapper[3556]: I1128 00:33:18.712624 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:33:18 crc kubenswrapper[3556]: I1128 00:33:18.713070 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:33:18 crc kubenswrapper[3556]: I1128 00:33:18.713097 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:33:18 crc kubenswrapper[3556]: I1128 00:33:18.713126 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:33:18 crc kubenswrapper[3556]: I1128 00:33:18.713152 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:33:30 crc kubenswrapper[3556]: I1128 00:33:30.131553 3556 patch_prober.go:28] interesting pod/console-644bb77b49-5x5xk container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.73:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:33:30 crc kubenswrapper[3556]: I1128 00:33:30.132722 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-644bb77b49-5x5xk" podUID="9e649ef6-bbda-4ad9-8a09-ac3803dd0cc1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.73:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 00:34:06 crc kubenswrapper[3556]: I1128 00:34:06.228034 3556 generic.go:334] "Generic (PLEG): container finished" podID="9c8b376f-89cb-42dc-8799-4af1da92cc07" 
containerID="1a9a75ccfa2f9b39a45b6ce0908ed3c57af2c433e82e650fc9ce55836d686382" exitCode=0 Nov 28 00:34:06 crc kubenswrapper[3556]: I1128 00:34:06.228166 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"9c8b376f-89cb-42dc-8799-4af1da92cc07","Type":"ContainerDied","Data":"1a9a75ccfa2f9b39a45b6ce0908ed3c57af2c433e82e650fc9ce55836d686382"} Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.520051 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604040 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildworkdir\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604123 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxwzg\" (UniqueName: \"kubernetes.io/projected/9c8b376f-89cb-42dc-8799-4af1da92cc07-kube-api-access-lxwzg\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604159 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-ca-bundles\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604203 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildcachedir\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: 
\"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604243 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-node-pullsecrets\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604290 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-root\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604299 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604335 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-push\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604344 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604364 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-proxy-ca-bundles\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604445 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-pull\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604474 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-system-configs\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604511 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-blob-cache\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604547 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-run\") pod \"9c8b376f-89cb-42dc-8799-4af1da92cc07\" (UID: \"9c8b376f-89cb-42dc-8799-4af1da92cc07\") " Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604832 3556 reconciler_common.go:300] "Volume 
detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.604864 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9c8b376f-89cb-42dc-8799-4af1da92cc07-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.605069 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.605230 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.605486 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.606072 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.609475 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.609824 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.610211 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8b376f-89cb-42dc-8799-4af1da92cc07-kube-api-access-lxwzg" (OuterVolumeSpecName: "kube-api-access-lxwzg") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "kube-api-access-lxwzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.614127 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.706197 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lxwzg\" (UniqueName: \"kubernetes.io/projected/9c8b376f-89cb-42dc-8799-4af1da92cc07-kube-api-access-lxwzg\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.706564 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.706710 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.706851 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.706989 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/9c8b376f-89cb-42dc-8799-4af1da92cc07-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.707148 3556 
reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.707294 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.707430 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.861692 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:34:07 crc kubenswrapper[3556]: I1128 00:34:07.910211 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:08 crc kubenswrapper[3556]: I1128 00:34:08.248266 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"9c8b376f-89cb-42dc-8799-4af1da92cc07","Type":"ContainerDied","Data":"9d4c04ababdab7d5c3bd9e6bdb30a1186e33953d71c5e7d2a9136481621cb122"} Nov 28 00:34:08 crc kubenswrapper[3556]: I1128 00:34:08.248297 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d4c04ababdab7d5c3bd9e6bdb30a1186e33953d71c5e7d2a9136481621cb122" Nov 28 00:34:08 crc kubenswrapper[3556]: I1128 00:34:08.248588 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Nov 28 00:34:10 crc kubenswrapper[3556]: I1128 00:34:10.191814 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9c8b376f-89cb-42dc-8799-4af1da92cc07" (UID: "9c8b376f-89cb-42dc-8799-4af1da92cc07"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:34:10 crc kubenswrapper[3556]: I1128 00:34:10.241101 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9c8b376f-89cb-42dc-8799-4af1da92cc07-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.240809 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.241363 3556 topology_manager.go:215] "Topology Admit Handler" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" podNamespace="service-telemetry" podName="sg-bridge-1-build" Nov 28 00:34:12 crc kubenswrapper[3556]: E1128 00:34:12.241627 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerName="git-clone" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.241645 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerName="git-clone" Nov 28 00:34:12 crc kubenswrapper[3556]: E1128 00:34:12.241662 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerName="manage-dockerfile" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.241674 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerName="manage-dockerfile" Nov 28 00:34:12 crc kubenswrapper[3556]: E1128 00:34:12.241702 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerName="docker-build" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.241714 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerName="docker-build" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.241881 3556 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9c8b376f-89cb-42dc-8799-4af1da92cc07" containerName="docker-build" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.242876 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.246585 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.246948 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.247133 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.248381 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ps7tk" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.255565 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.268747 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.268812 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build" Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 
00:34:12.268854 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.268882 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9wj9\" (UniqueName: \"kubernetes.io/projected/e6931bb0-4de9-46b9-b110-7e278b4299d8-kube-api-access-p9wj9\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.268922 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.268950 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.268979 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-push\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.269034 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-pull\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.269067 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.269099 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.269143 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.269178 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370289 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370408 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370458 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370530 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370583 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370641 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370680 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-p9wj9\" (UniqueName: \"kubernetes.io/projected/e6931bb0-4de9-46b9-b110-7e278b4299d8-kube-api-access-p9wj9\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370736 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370740 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370774 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370824 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-push\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370890 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-pull\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.370953 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.371124 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.371237 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.371296 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.371371 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.371369 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.371626 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.372463 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.372908 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.378862 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-push\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.378985 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-pull\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.407818 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9wj9\" (UniqueName: \"kubernetes.io/projected/e6931bb0-4de9-46b9-b110-7e278b4299d8-kube-api-access-p9wj9\") pod \"sg-bridge-1-build\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") " pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.561576 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:12 crc kubenswrapper[3556]: I1128 00:34:12.808195 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Nov 28 00:34:13 crc kubenswrapper[3556]: I1128 00:34:13.280185 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e6931bb0-4de9-46b9-b110-7e278b4299d8","Type":"ContainerStarted","Data":"2bc2fcbb098576a9788fa9f93f81fed6c34f6088af1f4e0258c7aadb17d00970"}
Nov 28 00:34:14 crc kubenswrapper[3556]: I1128 00:34:14.287251 3556 generic.go:334] "Generic (PLEG): container finished" podID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerID="5c1d45918ce272aa26b6ef8a1fbc964c50e19f3c8d9bf9757f660f35937e0260" exitCode=0
Nov 28 00:34:14 crc kubenswrapper[3556]: I1128 00:34:14.287326 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e6931bb0-4de9-46b9-b110-7e278b4299d8","Type":"ContainerDied","Data":"5c1d45918ce272aa26b6ef8a1fbc964c50e19f3c8d9bf9757f660f35937e0260"}
Nov 28 00:34:15 crc kubenswrapper[3556]: I1128 00:34:15.296027 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e6931bb0-4de9-46b9-b110-7e278b4299d8","Type":"ContainerStarted","Data":"f717bf8c9db4c39b33f1caede8a047a4a47599b717d461b16b51d57d72983f2e"}
Nov 28 00:34:15 crc kubenswrapper[3556]: I1128 00:34:15.331482 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.331414442 podStartE2EDuration="3.331414442s" podCreationTimestamp="2025-11-28 00:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:34:15.327574802 +0000 UTC m=+1316.919806852" watchObservedRunningTime="2025-11-28 00:34:15.331414442 +0000 UTC m=+1316.923646462"
Nov 28 00:34:18 crc kubenswrapper[3556]: I1128 00:34:18.713634 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:34:18 crc kubenswrapper[3556]: I1128 00:34:18.713940 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:34:18 crc kubenswrapper[3556]: I1128 00:34:18.713971 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:34:18 crc kubenswrapper[3556]: I1128 00:34:18.714003 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:34:18 crc kubenswrapper[3556]: I1128 00:34:18.714065 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:34:22 crc kubenswrapper[3556]: I1128 00:34:22.391813 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Nov 28 00:34:22 crc kubenswrapper[3556]: I1128 00:34:22.393822 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerName="docker-build" containerID="cri-o://f717bf8c9db4c39b33f1caede8a047a4a47599b717d461b16b51d57d72983f2e" gracePeriod=30
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.343667 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e6931bb0-4de9-46b9-b110-7e278b4299d8/docker-build/0.log"
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.344730 3556 generic.go:334] "Generic (PLEG): container finished" podID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerID="f717bf8c9db4c39b33f1caede8a047a4a47599b717d461b16b51d57d72983f2e" exitCode=1
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.344792 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e6931bb0-4de9-46b9-b110-7e278b4299d8","Type":"ContainerDied","Data":"f717bf8c9db4c39b33f1caede8a047a4a47599b717d461b16b51d57d72983f2e"}
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.912367 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e6931bb0-4de9-46b9-b110-7e278b4299d8/docker-build/0.log"
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.912971 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.943736 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-node-pullsecrets\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.943811 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildcachedir\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.943851 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-pull\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.976947 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.977055 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildworkdir\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.977168 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-run\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.977259 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-system-configs\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.977320 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-push\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.977402 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-blob-cache\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.977465 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-proxy-ca-bundles\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.978211 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.977136 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.978582 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.978900 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.979004 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-ca-bundles\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.979106 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-root\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.979184 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9wj9\" (UniqueName: \"kubernetes.io/projected/e6931bb0-4de9-46b9-b110-7e278b4299d8-kube-api-access-p9wj9\") pod \"e6931bb0-4de9-46b9-b110-7e278b4299d8\" (UID: \"e6931bb0-4de9-46b9-b110-7e278b4299d8\") "
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.981058 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.981089 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildcachedir\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.981347 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-buildworkdir\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.981376 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-system-configs\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.981392 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.985187 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6931bb0-4de9-46b9-b110-7e278b4299d8-kube-api-access-p9wj9" (OuterVolumeSpecName: "kube-api-access-p9wj9") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "kube-api-access-p9wj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.985324 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.987584 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:34:23 crc kubenswrapper[3556]: I1128 00:34:23.988204 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.007092 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.046259 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.046379 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" podNamespace="service-telemetry" podName="sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: E1128 00:34:24.046521 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerName="manage-dockerfile"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.046532 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerName="manage-dockerfile"
Nov 28 00:34:24 crc kubenswrapper[3556]: E1128 00:34:24.046542 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerName="docker-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.046549 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerName="docker-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.046650 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" containerName="docker-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.050693 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.057947 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.058874 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.059079 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.066442 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.082527 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.082566 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p9wj9\" (UniqueName: \"kubernetes.io/projected/e6931bb0-4de9-46b9-b110-7e278b4299d8-kube-api-access-p9wj9\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.082582 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.082597 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-run\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.082612 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/e6931bb0-4de9-46b9-b110-7e278b4299d8-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.095685 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184090 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184193 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184272 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-push\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184477 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184543 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-pull\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184657 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184780 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwng\" (UniqueName: \"kubernetes.io/projected/ef9cd30a-cbf8-44a6-8851-5609a50c1498-kube-api-access-4mwng\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184827 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.184964 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.185053 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.185096 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.185179 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.185320 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-build-blob-cache\") on node \"crc\" DevicePath \"\""
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286483 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286542 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286565 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286589 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286616 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build"
Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286660 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-ca-bundles\") pod
\"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286682 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-push\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.286847 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287152 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287247 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287287 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " 
pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287313 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-pull\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287337 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287361 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwng\" (UniqueName: \"kubernetes.io/projected/ef9cd30a-cbf8-44a6-8851-5609a50c1498-kube-api-access-4mwng\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287371 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287381 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc 
kubenswrapper[3556]: I1128 00:34:24.287489 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287691 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287835 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.287943 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.288290 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.290834 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-push\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.290914 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-pull\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.305595 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwng\" (UniqueName: \"kubernetes.io/projected/ef9cd30a-cbf8-44a6-8851-5609a50c1498-kube-api-access-4mwng\") pod \"sg-bridge-2-build\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.352067 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_e6931bb0-4de9-46b9-b110-7e278b4299d8/docker-build/0.log" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.352504 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"e6931bb0-4de9-46b9-b110-7e278b4299d8","Type":"ContainerDied","Data":"2bc2fcbb098576a9788fa9f93f81fed6c34f6088af1f4e0258c7aadb17d00970"} Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.352535 3556 scope.go:117] "RemoveContainer" containerID="f717bf8c9db4c39b33f1caede8a047a4a47599b717d461b16b51d57d72983f2e" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.352603 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.371221 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.378248 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e6931bb0-4de9-46b9-b110-7e278b4299d8" (UID: "e6931bb0-4de9-46b9-b110-7e278b4299d8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.388680 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e6931bb0-4de9-46b9-b110-7e278b4299d8-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.415562 3556 scope.go:117] "RemoveContainer" containerID="5c1d45918ce272aa26b6ef8a1fbc964c50e19f3c8d9bf9757f660f35937e0260" Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.697243 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.699967 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.791223 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Nov 28 00:34:24 crc kubenswrapper[3556]: I1128 00:34:24.927005 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6931bb0-4de9-46b9-b110-7e278b4299d8" path="/var/lib/kubelet/pods/e6931bb0-4de9-46b9-b110-7e278b4299d8/volumes" Nov 28 00:34:25 crc kubenswrapper[3556]: I1128 00:34:25.361444 3556 kubelet.go:2461] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"ef9cd30a-cbf8-44a6-8851-5609a50c1498","Type":"ContainerStarted","Data":"d93f505ed67e8cb8c5257428d8a67d1d1322d6b77f8e35b695048e6a5eb4a6b1"} Nov 28 00:34:25 crc kubenswrapper[3556]: I1128 00:34:25.361526 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"ef9cd30a-cbf8-44a6-8851-5609a50c1498","Type":"ContainerStarted","Data":"09fa6f6926120c6af0ca43af7639d8dc623635c5e94fd573495e9f90f1e94527"} Nov 28 00:34:26 crc kubenswrapper[3556]: I1128 00:34:26.372525 3556 generic.go:334] "Generic (PLEG): container finished" podID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerID="d93f505ed67e8cb8c5257428d8a67d1d1322d6b77f8e35b695048e6a5eb4a6b1" exitCode=0 Nov 28 00:34:26 crc kubenswrapper[3556]: I1128 00:34:26.373081 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"ef9cd30a-cbf8-44a6-8851-5609a50c1498","Type":"ContainerDied","Data":"d93f505ed67e8cb8c5257428d8a67d1d1322d6b77f8e35b695048e6a5eb4a6b1"} Nov 28 00:34:27 crc kubenswrapper[3556]: I1128 00:34:27.381614 3556 generic.go:334] "Generic (PLEG): container finished" podID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerID="bfcb60766f4f7172a0bf25e33c479e504e817c6a39ef919cac908985ed989c81" exitCode=0 Nov 28 00:34:27 crc kubenswrapper[3556]: I1128 00:34:27.381718 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"ef9cd30a-cbf8-44a6-8851-5609a50c1498","Type":"ContainerDied","Data":"bfcb60766f4f7172a0bf25e33c479e504e817c6a39ef919cac908985ed989c81"} Nov 28 00:34:27 crc kubenswrapper[3556]: I1128 00:34:27.451915 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_ef9cd30a-cbf8-44a6-8851-5609a50c1498/manage-dockerfile/0.log" Nov 28 00:34:28 crc kubenswrapper[3556]: I1128 00:34:28.391067 3556 kubelet.go:2461] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"ef9cd30a-cbf8-44a6-8851-5609a50c1498","Type":"ContainerStarted","Data":"42af246b0c7161ed2bc3490b55f6b7298cc4302eec172ccc15cf890b14261bd4"} Nov 28 00:34:28 crc kubenswrapper[3556]: I1128 00:34:28.459475 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=4.459436811 podStartE2EDuration="4.459436811s" podCreationTimestamp="2025-11-28 00:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:34:28.455751664 +0000 UTC m=+1330.047983664" watchObservedRunningTime="2025-11-28 00:34:28.459436811 +0000 UTC m=+1330.051668811" Nov 28 00:35:05 crc kubenswrapper[3556]: I1128 00:35:05.054522 3556 patch_prober.go:28] interesting pod/openshift-config-operator-77658b5b66-dq5sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:35:05 crc kubenswrapper[3556]: I1128 00:35:05.055229 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-77658b5b66-dq5sc" podUID="530553aa-0a1d-423e-8a22-f5eb4bdbb883" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:35:18 crc kubenswrapper[3556]: I1128 00:35:18.714466 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:35:18 crc kubenswrapper[3556]: I1128 00:35:18.715219 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" 
Nov 28 00:35:18 crc kubenswrapper[3556]: I1128 00:35:18.715278 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:35:18 crc kubenswrapper[3556]: I1128 00:35:18.715312 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:35:18 crc kubenswrapper[3556]: I1128 00:35:18.715347 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:35:22 crc kubenswrapper[3556]: I1128 00:35:22.664628 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:35:22 crc kubenswrapper[3556]: I1128 00:35:22.665134 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:35:29 crc kubenswrapper[3556]: I1128 00:35:29.808557 3556 generic.go:334] "Generic (PLEG): container finished" podID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerID="42af246b0c7161ed2bc3490b55f6b7298cc4302eec172ccc15cf890b14261bd4" exitCode=0 Nov 28 00:35:29 crc kubenswrapper[3556]: I1128 00:35:29.808654 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"ef9cd30a-cbf8-44a6-8851-5609a50c1498","Type":"ContainerDied","Data":"42af246b0c7161ed2bc3490b55f6b7298cc4302eec172ccc15cf890b14261bd4"} Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.149356 3556 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215454 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildworkdir\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215534 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-proxy-ca-bundles\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215583 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-blob-cache\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215654 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-node-pullsecrets\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215714 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-ca-bundles\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215757 3556 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-run\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215812 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-system-configs\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215856 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-push\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215891 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildcachedir\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.215947 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-root\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.216004 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mwng\" (UniqueName: 
\"kubernetes.io/projected/ef9cd30a-cbf8-44a6-8851-5609a50c1498-kube-api-access-4mwng\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.216085 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-pull\") pod \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\" (UID: \"ef9cd30a-cbf8-44a6-8851-5609a50c1498\") " Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.216220 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.216328 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.216402 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.216887 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.217088 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.217164 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.217877 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.220687 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.220744 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.220759 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.220774 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.220789 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.220800 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.220812 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc 
kubenswrapper[3556]: I1128 00:35:31.222540 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.223162 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9cd30a-cbf8-44a6-8851-5609a50c1498-kube-api-access-4mwng" (OuterVolumeSpecName: "kube-api-access-4mwng") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "kube-api-access-4mwng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.223898 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.322435 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.322499 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/ef9cd30a-cbf8-44a6-8851-5609a50c1498-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.322515 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4mwng\" (UniqueName: \"kubernetes.io/projected/ef9cd30a-cbf8-44a6-8851-5609a50c1498-kube-api-access-4mwng\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.329084 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.424908 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.829065 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"ef9cd30a-cbf8-44a6-8851-5609a50c1498","Type":"ContainerDied","Data":"09fa6f6926120c6af0ca43af7639d8dc623635c5e94fd573495e9f90f1e94527"} Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.829109 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09fa6f6926120c6af0ca43af7639d8dc623635c5e94fd573495e9f90f1e94527" Nov 28 00:35:31 crc kubenswrapper[3556]: I1128 00:35:31.829170 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Nov 28 00:35:32 crc kubenswrapper[3556]: I1128 00:35:32.118662 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ef9cd30a-cbf8-44a6-8851-5609a50c1498" (UID: "ef9cd30a-cbf8-44a6-8851-5609a50c1498"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:35:32 crc kubenswrapper[3556]: I1128 00:35:32.136298 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ef9cd30a-cbf8-44a6-8851-5609a50c1498-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.868503 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.869086 3556 topology_manager.go:215] "Topology Admit Handler" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" podNamespace="service-telemetry" podName="prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: E1128 00:35:36.869327 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerName="manage-dockerfile" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.869345 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerName="manage-dockerfile" Nov 28 00:35:36 crc kubenswrapper[3556]: E1128 00:35:36.869376 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerName="docker-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.869389 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerName="docker-build" Nov 28 00:35:36 crc kubenswrapper[3556]: E1128 00:35:36.869410 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerName="git-clone" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.869422 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerName="git-clone" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.869590 3556 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9cd30a-cbf8-44a6-8851-5609a50c1498" containerName="docker-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.870542 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.876114 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.876322 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.876493 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ps7tk" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.876608 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.883074 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908161 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908532 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" 
(UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908563 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908616 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908649 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908682 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qb22\" (UniqueName: \"kubernetes.io/projected/43d1a23f-c67b-4353-872e-1905f4381a4c-kube-api-access-6qb22\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908910 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.908988 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.909171 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.909275 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.909351 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" 
Nov 28 00:35:36 crc kubenswrapper[3556]: I1128 00:35:36.909395 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.011761 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-6qb22\" (UniqueName: \"kubernetes.io/projected/43d1a23f-c67b-4353-872e-1905f4381a4c-kube-api-access-6qb22\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.011867 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.011946 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.011993 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-root\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012090 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012142 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012197 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012268 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012321 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012469 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012508 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012528 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012536 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012587 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" 
(UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012717 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012799 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.012859 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.013071 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.013307 3556 operation_generator.go:721] 
"MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.013772 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.013832 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.018132 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.028872 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc 
kubenswrapper[3556]: I1128 00:35:37.033672 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qb22\" (UniqueName: \"kubernetes.io/projected/43d1a23f-c67b-4353-872e-1905f4381a4c-kube-api-access-6qb22\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.193942 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.419636 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 28 00:35:37 crc kubenswrapper[3556]: I1128 00:35:37.873491 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"43d1a23f-c67b-4353-872e-1905f4381a4c","Type":"ContainerStarted","Data":"4c9c0e04724b2145f1922b5b19e53dce9084d1f72db290c88566d8179c37309d"} Nov 28 00:35:38 crc kubenswrapper[3556]: I1128 00:35:38.884665 3556 generic.go:334] "Generic (PLEG): container finished" podID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerID="55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559" exitCode=0 Nov 28 00:35:38 crc kubenswrapper[3556]: I1128 00:35:38.885045 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"43d1a23f-c67b-4353-872e-1905f4381a4c","Type":"ContainerDied","Data":"55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559"} Nov 28 00:35:39 crc kubenswrapper[3556]: I1128 00:35:39.893926 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"43d1a23f-c67b-4353-872e-1905f4381a4c","Type":"ContainerStarted","Data":"5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610"} Nov 28 
00:35:39 crc kubenswrapper[3556]: I1128 00:35:39.942720 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.942635894 podStartE2EDuration="3.942635894s" podCreationTimestamp="2025-11-28 00:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:35:39.921611986 +0000 UTC m=+1401.513844036" watchObservedRunningTime="2025-11-28 00:35:39.942635894 +0000 UTC m=+1401.534867944" Nov 28 00:35:46 crc kubenswrapper[3556]: I1128 00:35:46.922050 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Nov 28 00:35:46 crc kubenswrapper[3556]: I1128 00:35:46.922940 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerName="docker-build" containerID="cri-o://5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610" gracePeriod=30 Nov 28 00:35:47 crc kubenswrapper[3556]: E1128 00:35:47.052813 3556 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43d1a23f_c67b_4353_872e_1905f4381a4c.slice/crio-5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43d1a23f_c67b_4353_872e_1905f4381a4c.slice/crio-conmon-5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610.scope\": RecentStats: unable to find data in memory cache]" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.236388 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_43d1a23f-c67b-4353-872e-1905f4381a4c/docker-build/0.log" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.236772 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246371 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-ca-bundles\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246410 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-buildworkdir\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246448 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-proxy-ca-bundles\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246474 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-push\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246506 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-root\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246539 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qb22\" (UniqueName: \"kubernetes.io/projected/43d1a23f-c67b-4353-872e-1905f4381a4c-kube-api-access-6qb22\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246563 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-build-blob-cache\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246598 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-node-pullsecrets\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246622 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-pull\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246648 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-system-configs\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: 
\"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246671 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-buildcachedir\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.246693 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-run\") pod \"43d1a23f-c67b-4353-872e-1905f4381a4c\" (UID: \"43d1a23f-c67b-4353-872e-1905f4381a4c\") " Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.247944 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.248747 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.248848 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.249307 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.249306 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.249949 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.250336 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.260377 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.261360 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.262054 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d1a23f-c67b-4353-872e-1905f4381a4c-kube-api-access-6qb22" (OuterVolumeSpecName: "kube-api-access-6qb22") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "kube-api-access-6qb22". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.305541 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348038 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-build-blob-cache\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348091 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6qb22\" (UniqueName: \"kubernetes.io/projected/43d1a23f-c67b-4353-872e-1905f4381a4c-kube-api-access-6qb22\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348112 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348133 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348153 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-system-configs\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348173 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/43d1a23f-c67b-4353-872e-1905f4381a4c-buildcachedir\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348193 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-run\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348211 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348229 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-buildworkdir\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348248 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43d1a23f-c67b-4353-872e-1905f4381a4c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.348270 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/43d1a23f-c67b-4353-872e-1905f4381a4c-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.567751 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "43d1a23f-c67b-4353-872e-1905f4381a4c" (UID: "43d1a23f-c67b-4353-872e-1905f4381a4c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.652566 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/43d1a23f-c67b-4353-872e-1905f4381a4c-container-storage-root\") on node \"crc\" DevicePath \"\""
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.983115 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_43d1a23f-c67b-4353-872e-1905f4381a4c/docker-build/0.log"
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.984577 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"43d1a23f-c67b-4353-872e-1905f4381a4c","Type":"ContainerDied","Data":"5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610"}
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.984655 3556 scope.go:117] "RemoveContainer" containerID="5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610"
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.984858 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.984946 3556 generic.go:334] "Generic (PLEG): container finished" podID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerID="5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610" exitCode=1
Nov 28 00:35:47 crc kubenswrapper[3556]: I1128 00:35:47.986095 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"43d1a23f-c67b-4353-872e-1905f4381a4c","Type":"ContainerDied","Data":"4c9c0e04724b2145f1922b5b19e53dce9084d1f72db290c88566d8179c37309d"}
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.047764 3556 scope.go:117] "RemoveContainer" containerID="55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.075467 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.080666 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.084931 3556 scope.go:117] "RemoveContainer" containerID="5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610"
Nov 28 00:35:48 crc kubenswrapper[3556]: E1128 00:35:48.085780 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610\": container with ID starting with 5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610 not found: ID does not exist" containerID="5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.085881 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610"} err="failed to get container status \"5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610\": rpc error: code = NotFound desc = could not find container \"5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610\": container with ID starting with 5e8217906773399ca5fe9ab48f0da8c3e85b508bdb2260dea04ffc264c132610 not found: ID does not exist"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.085908 3556 scope.go:117] "RemoveContainer" containerID="55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559"
Nov 28 00:35:48 crc kubenswrapper[3556]: E1128 00:35:48.086617 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559\": container with ID starting with 55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559 not found: ID does not exist" containerID="55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.086678 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559"} err="failed to get container status \"55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559\": rpc error: code = NotFound desc = could not find container \"55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559\": container with ID starting with 55985812d43bfa7de670f55d7e59b3cc6ecf7cbf3def054bd45ec4abbd197559 not found: ID does not exist"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.665832 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.665959 3556 topology_manager.go:215] "Topology Admit Handler" podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" podNamespace="service-telemetry" podName="prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: E1128 00:35:48.666173 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerName="manage-dockerfile"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.666187 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerName="manage-dockerfile"
Nov 28 00:35:48 crc kubenswrapper[3556]: E1128 00:35:48.666204 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerName="docker-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.666215 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerName="docker-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.666358 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" containerName="docker-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.667271 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.670474 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.670546 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.674490 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.674896 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ps7tk"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.690759 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782280 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782328 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782368 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782478 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782528 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782567 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782635 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782694 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzh5t\" (UniqueName: \"kubernetes.io/projected/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-kube-api-access-zzh5t\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782727 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782752 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782777 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.782833 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883179 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883234 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883263 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883291 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883325 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883338 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883363 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883402 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883435 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-zzh5t\" (UniqueName: \"kubernetes.io/projected/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-kube-api-access-zzh5t\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883472 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883518 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883544 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883598 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.883594 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.884076 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.884339 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.884362 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.884594 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.884786 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.885204 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.885545 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.896370 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.899985 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.917831 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzh5t\" (UniqueName: \"kubernetes.io/projected/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-kube-api-access-zzh5t\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.940553 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d1a23f-c67b-4353-872e-1905f4381a4c" path="/var/lib/kubelet/pods/43d1a23f-c67b-4353-872e-1905f4381a4c/volumes"
Nov 28 00:35:48 crc kubenswrapper[3556]: I1128 00:35:48.981768 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Nov 28 00:35:49 crc kubenswrapper[3556]: I1128 00:35:49.177793 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Nov 28 00:35:50 crc kubenswrapper[3556]: I1128 00:35:50.002144 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cb9a1433-48f2-49b6-a216-02bcd62dc7ca","Type":"ContainerStarted","Data":"60421265efcf3b7720002ae8875d55372c995f291e529aaa4f957e0e28c4ac83"}
Nov 28 00:35:50 crc kubenswrapper[3556]: I1128 00:35:50.002445 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cb9a1433-48f2-49b6-a216-02bcd62dc7ca","Type":"ContainerStarted","Data":"3b4e25542e02d9711ec3b2403d025e325d33b126a061e8e758fef3039e98d3aa"}
Nov 28 00:35:51 crc kubenswrapper[3556]: I1128 00:35:51.011110 3556 generic.go:334] "Generic (PLEG): container finished" podID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerID="60421265efcf3b7720002ae8875d55372c995f291e529aaa4f957e0e28c4ac83" exitCode=0
Nov 28 00:35:51 crc kubenswrapper[3556]: I1128 00:35:51.011226 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cb9a1433-48f2-49b6-a216-02bcd62dc7ca","Type":"ContainerDied","Data":"60421265efcf3b7720002ae8875d55372c995f291e529aaa4f957e0e28c4ac83"}
Nov 28 00:35:52 crc kubenswrapper[3556]: I1128 00:35:52.022310 3556 generic.go:334] "Generic (PLEG): container finished" podID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerID="6c915e2697eb03806301a7e3834c4e9fb965ae4cd3e67037108f3529e7fcbec9" exitCode=0
Nov 28 00:35:52 crc kubenswrapper[3556]: I1128 00:35:52.022356 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cb9a1433-48f2-49b6-a216-02bcd62dc7ca","Type":"ContainerDied","Data":"6c915e2697eb03806301a7e3834c4e9fb965ae4cd3e67037108f3529e7fcbec9"}
Nov 28 00:35:52 crc kubenswrapper[3556]: I1128 00:35:52.103099 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_cb9a1433-48f2-49b6-a216-02bcd62dc7ca/manage-dockerfile/0.log"
Nov 28 00:35:52 crc kubenswrapper[3556]: I1128 00:35:52.664660 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 00:35:52 crc kubenswrapper[3556]: I1128 00:35:52.664772 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 00:35:53 crc kubenswrapper[3556]: I1128 00:35:53.034350 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cb9a1433-48f2-49b6-a216-02bcd62dc7ca","Type":"ContainerStarted","Data":"231750a1625fdaaec0f1a0714ecc2a6f9ef737cddef90764b89794c87ef064fb"}
Nov 28 00:35:53 crc kubenswrapper[3556]: I1128 00:35:53.087920 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.087844417 podStartE2EDuration="5.087844417s" podCreationTimestamp="2025-11-28 00:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:35:53.083688285 +0000 UTC m=+1414.675920335" watchObservedRunningTime="2025-11-28 00:35:53.087844417 +0000 UTC m=+1414.680076447"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.270192 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gjvbq"]
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.270840 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" podNamespace="openshift-marketplace" podName="certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.272115 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.282697 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gjvbq"]
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.380128 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-catalog-content\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.380200 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-utilities\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.380297 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxw9\" (UniqueName: \"kubernetes.io/projected/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-kube-api-access-4fxw9\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.481528 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-catalog-content\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.481588 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-utilities\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.481635 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxw9\" (UniqueName: \"kubernetes.io/projected/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-kube-api-access-4fxw9\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.482514 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-catalog-content\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.482742 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-utilities\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.500133 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxw9\" (UniqueName: \"kubernetes.io/projected/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-kube-api-access-4fxw9\") pod \"certified-operators-gjvbq\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.590698 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjvbq"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.717542 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.717854 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.717879 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.717909 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.717936 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:36:18 crc kubenswrapper[3556]: I1128 00:36:18.819430 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gjvbq"]
Nov 28 00:36:19 crc kubenswrapper[3556]: I1128 00:36:19.177913 3556 generic.go:334] "Generic (PLEG): container finished" podID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerID="f338e5539203c72062d14f46fb2108736784b72ecbbcbbb03b5951302ac03c4a" exitCode=0
Nov 28 00:36:19 crc kubenswrapper[3556]: I1128 00:36:19.177952 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjvbq" event={"ID":"ecbf6302-b2e7-4f9d-9794-f49deea48d1e","Type":"ContainerDied","Data":"f338e5539203c72062d14f46fb2108736784b72ecbbcbbb03b5951302ac03c4a"}
Nov 28 00:36:19 crc kubenswrapper[3556]: I1128 00:36:19.177976 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjvbq" event={"ID":"ecbf6302-b2e7-4f9d-9794-f49deea48d1e","Type":"ContainerStarted","Data":"3ab1df6ec5dd224477dba877d70cac59bd0dfe6c48af8cfb796ca2257575e0da"}
Nov 28 00:36:19 crc kubenswrapper[3556]: I1128 00:36:19.179792 3556 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 00:36:20 crc kubenswrapper[3556]: I1128 00:36:20.185210 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjvbq" event={"ID":"ecbf6302-b2e7-4f9d-9794-f49deea48d1e","Type":"ContainerStarted","Data":"3b24278a42a00666159053c78ba263ef6ead0a334230c14c39bf34f86f692870"}
Nov 28 00:36:21 crc kubenswrapper[3556]: I1128 00:36:21.191831 3556 generic.go:334] "Generic (PLEG): container finished" podID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerID="3b24278a42a00666159053c78ba263ef6ead0a334230c14c39bf34f86f692870" exitCode=0
Nov 28 00:36:21 crc kubenswrapper[3556]: I1128 00:36:21.191943 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjvbq" event={"ID":"ecbf6302-b2e7-4f9d-9794-f49deea48d1e","Type":"ContainerDied","Data":"3b24278a42a00666159053c78ba263ef6ead0a334230c14c39bf34f86f692870"}
Nov 28 00:36:22 crc kubenswrapper[3556]: I1128 00:36:22.199705 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjvbq" event={"ID":"ecbf6302-b2e7-4f9d-9794-f49deea48d1e","Type":"ContainerStarted","Data":"5b704432bd3bdaa788b7fe352cb61967b3684c048a4452b2c67c47bc1e036a0c"}
Nov 28 00:36:22 crc kubenswrapper[3556]: I1128 00:36:22.225613 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gjvbq" podStartSLOduration=1.8445707040000001 podStartE2EDuration="4.225565731s" podCreationTimestamp="2025-11-28 00:36:18 +0000 UTC" firstStartedPulling="2025-11-28 00:36:19.179597624 +0000 UTC m=+1440.771829614" lastFinishedPulling="2025-11-28 00:36:21.560592651 +0000 UTC
m=+1443.152824641" observedRunningTime="2025-11-28 00:36:22.223708596 +0000 UTC m=+1443.815940586" watchObservedRunningTime="2025-11-28 00:36:22.225565731 +0000 UTC m=+1443.817797731" Nov 28 00:36:22 crc kubenswrapper[3556]: I1128 00:36:22.663768 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:36:22 crc kubenswrapper[3556]: I1128 00:36:22.664183 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:36:22 crc kubenswrapper[3556]: I1128 00:36:22.664331 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:36:22 crc kubenswrapper[3556]: I1128 00:36:22.665085 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77ce0f6a03e4a0ff03abcc42291734e51c9965a62271e2d0ca1f6177a9180a17"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 00:36:22 crc kubenswrapper[3556]: I1128 00:36:22.665392 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://77ce0f6a03e4a0ff03abcc42291734e51c9965a62271e2d0ca1f6177a9180a17" gracePeriod=600 Nov 28 00:36:23 crc 
kubenswrapper[3556]: I1128 00:36:23.209512 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="77ce0f6a03e4a0ff03abcc42291734e51c9965a62271e2d0ca1f6177a9180a17" exitCode=0 Nov 28 00:36:23 crc kubenswrapper[3556]: I1128 00:36:23.209591 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"77ce0f6a03e4a0ff03abcc42291734e51c9965a62271e2d0ca1f6177a9180a17"} Nov 28 00:36:23 crc kubenswrapper[3556]: I1128 00:36:23.210219 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2"} Nov 28 00:36:23 crc kubenswrapper[3556]: I1128 00:36:23.210252 3556 scope.go:117] "RemoveContainer" containerID="9cf992a274a0e70310dc3d7d1301a0c527636124f65ae98d66c11396ccb07234" Nov 28 00:36:28 crc kubenswrapper[3556]: I1128 00:36:28.591383 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gjvbq" Nov 28 00:36:28 crc kubenswrapper[3556]: I1128 00:36:28.591908 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gjvbq" Nov 28 00:36:28 crc kubenswrapper[3556]: I1128 00:36:28.689364 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gjvbq" Nov 28 00:36:29 crc kubenswrapper[3556]: I1128 00:36:29.347749 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gjvbq" Nov 28 00:36:29 crc kubenswrapper[3556]: I1128 00:36:29.394339 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-gjvbq"] Nov 28 00:36:31 crc kubenswrapper[3556]: I1128 00:36:31.248369 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gjvbq" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="registry-server" containerID="cri-o://5b704432bd3bdaa788b7fe352cb61967b3684c048a4452b2c67c47bc1e036a0c" gracePeriod=2 Nov 28 00:36:36 crc kubenswrapper[3556]: I1128 00:36:36.904714 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjvbq" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.017173 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fxw9\" (UniqueName: \"kubernetes.io/projected/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-kube-api-access-4fxw9\") pod \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.017501 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-utilities\") pod \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.017671 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-catalog-content\") pod \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\" (UID: \"ecbf6302-b2e7-4f9d-9794-f49deea48d1e\") " Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.018433 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-utilities" (OuterVolumeSpecName: "utilities") pod "ecbf6302-b2e7-4f9d-9794-f49deea48d1e" (UID: 
"ecbf6302-b2e7-4f9d-9794-f49deea48d1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.025147 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-kube-api-access-4fxw9" (OuterVolumeSpecName: "kube-api-access-4fxw9") pod "ecbf6302-b2e7-4f9d-9794-f49deea48d1e" (UID: "ecbf6302-b2e7-4f9d-9794-f49deea48d1e"). InnerVolumeSpecName "kube-api-access-4fxw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.120500 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4fxw9\" (UniqueName: \"kubernetes.io/projected/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-kube-api-access-4fxw9\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.120540 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.150673 3556 generic.go:334] "Generic (PLEG): container finished" podID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerID="5b704432bd3bdaa788b7fe352cb61967b3684c048a4452b2c67c47bc1e036a0c" exitCode=0 Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.150725 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjvbq" event={"ID":"ecbf6302-b2e7-4f9d-9794-f49deea48d1e","Type":"ContainerDied","Data":"5b704432bd3bdaa788b7fe352cb61967b3684c048a4452b2c67c47bc1e036a0c"} Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.150757 3556 scope.go:117] "RemoveContainer" containerID="5b704432bd3bdaa788b7fe352cb61967b3684c048a4452b2c67c47bc1e036a0c" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.173990 3556 scope.go:117] "RemoveContainer" 
containerID="3b24278a42a00666159053c78ba263ef6ead0a334230c14c39bf34f86f692870" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.204076 3556 scope.go:117] "RemoveContainer" containerID="f338e5539203c72062d14f46fb2108736784b72ecbbcbbb03b5951302ac03c4a" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.278754 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecbf6302-b2e7-4f9d-9794-f49deea48d1e" (UID: "ecbf6302-b2e7-4f9d-9794-f49deea48d1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:36:37 crc kubenswrapper[3556]: I1128 00:36:37.323401 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecbf6302-b2e7-4f9d-9794-f49deea48d1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:38 crc kubenswrapper[3556]: I1128 00:36:38.156911 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjvbq" event={"ID":"ecbf6302-b2e7-4f9d-9794-f49deea48d1e","Type":"ContainerDied","Data":"3ab1df6ec5dd224477dba877d70cac59bd0dfe6c48af8cfb796ca2257575e0da"} Nov 28 00:36:38 crc kubenswrapper[3556]: I1128 00:36:38.156958 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gjvbq" Nov 28 00:36:38 crc kubenswrapper[3556]: I1128 00:36:38.189075 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gjvbq"] Nov 28 00:36:38 crc kubenswrapper[3556]: I1128 00:36:38.196444 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gjvbq"] Nov 28 00:36:38 crc kubenswrapper[3556]: I1128 00:36:38.921007 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" path="/var/lib/kubelet/pods/ecbf6302-b2e7-4f9d-9794-f49deea48d1e/volumes" Nov 28 00:36:52 crc kubenswrapper[3556]: I1128 00:36:52.231335 3556 generic.go:334] "Generic (PLEG): container finished" podID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerID="231750a1625fdaaec0f1a0714ecc2a6f9ef737cddef90764b89794c87ef064fb" exitCode=0 Nov 28 00:36:52 crc kubenswrapper[3556]: I1128 00:36:52.232186 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cb9a1433-48f2-49b6-a216-02bcd62dc7ca","Type":"ContainerDied","Data":"231750a1625fdaaec0f1a0714ecc2a6f9ef737cddef90764b89794c87ef064fb"} Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.478871 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.627448 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzh5t\" (UniqueName: \"kubernetes.io/projected/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-kube-api-access-zzh5t\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.628194 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-node-pullsecrets\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.628423 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-pull\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.628581 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildcachedir\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.628742 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-root\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.628353 3556 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.628640 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.634298 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildworkdir\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.634467 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-ca-bundles\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.634535 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-system-configs\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.634592 3556 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-blob-cache\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.634637 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-run\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.634707 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-push\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.634759 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-proxy-ca-bundles\") pod \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\" (UID: \"cb9a1433-48f2-49b6-a216-02bcd62dc7ca\") " Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.635039 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.635230 3556 reconciler_common.go:300] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.635249 3556 reconciler_common.go:300] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildcachedir\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.635259 3556 reconciler_common.go:300] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.635653 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.636087 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.636121 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.636324 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.637296 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-kube-api-access-zzh5t" (OuterVolumeSpecName: "kube-api-access-zzh5t") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "kube-api-access-zzh5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.637303 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-pull" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-pull") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "builder-dockercfg-ps7tk-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.641134 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-push" (OuterVolumeSpecName: "builder-dockercfg-ps7tk-push") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "builder-dockercfg-ps7tk-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.736710 3556 reconciler_common.go:300] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-system-configs\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.736743 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-run\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.736756 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-push\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-push\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.736767 3556 reconciler_common.go:300] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.736794 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zzh5t\" (UniqueName: \"kubernetes.io/projected/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-kube-api-access-zzh5t\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 
00:36:53.736807 3556 reconciler_common.go:300] "Volume detached for volume \"builder-dockercfg-ps7tk-pull\" (UniqueName: \"kubernetes.io/secret/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-builder-dockercfg-ps7tk-pull\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.736818 3556 reconciler_common.go:300] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-buildworkdir\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.743794 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:36:53 crc kubenswrapper[3556]: I1128 00:36:53.838049 3556 reconciler_common.go:300] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-build-blob-cache\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:54 crc kubenswrapper[3556]: I1128 00:36:54.243438 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cb9a1433-48f2-49b6-a216-02bcd62dc7ca","Type":"ContainerDied","Data":"3b4e25542e02d9711ec3b2403d025e325d33b126a061e8e758fef3039e98d3aa"} Nov 28 00:36:54 crc kubenswrapper[3556]: I1128 00:36:54.243467 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b4e25542e02d9711ec3b2403d025e325d33b126a061e8e758fef3039e98d3aa" Nov 28 00:36:54 crc kubenswrapper[3556]: I1128 00:36:54.243517 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Nov 28 00:36:54 crc kubenswrapper[3556]: I1128 00:36:54.444385 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cb9a1433-48f2-49b6-a216-02bcd62dc7ca" (UID: "cb9a1433-48f2-49b6-a216-02bcd62dc7ca"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:36:54 crc kubenswrapper[3556]: I1128 00:36:54.444862 3556 reconciler_common.go:300] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb9a1433-48f2-49b6-a216-02bcd62dc7ca-container-storage-root\") on node \"crc\" DevicePath \"\"" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.172826 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-b4d9f888-97cvc"] Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173482 3556 topology_manager.go:215] "Topology Admit Handler" podUID="8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd" podNamespace="service-telemetry" podName="smart-gateway-operator-b4d9f888-97cvc" Nov 28 00:36:59 crc kubenswrapper[3556]: E1128 00:36:59.173633 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerName="docker-build" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173646 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerName="docker-build" Nov 28 00:36:59 crc kubenswrapper[3556]: E1128 00:36:59.173663 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="registry-server" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173673 3556 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="registry-server" Nov 28 00:36:59 crc kubenswrapper[3556]: E1128 00:36:59.173689 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerName="manage-dockerfile" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173698 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerName="manage-dockerfile" Nov 28 00:36:59 crc kubenswrapper[3556]: E1128 00:36:59.173712 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerName="git-clone" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173721 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerName="git-clone" Nov 28 00:36:59 crc kubenswrapper[3556]: E1128 00:36:59.173736 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="extract-utilities" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173745 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="extract-utilities" Nov 28 00:36:59 crc kubenswrapper[3556]: E1128 00:36:59.173759 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="extract-content" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173767 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="extract-content" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173892 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbf6302-b2e7-4f9d-9794-f49deea48d1e" containerName="registry-server" Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.173917 3556 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cb9a1433-48f2-49b6-a216-02bcd62dc7ca" containerName="docker-build"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.174404 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.176793 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-8nzrs"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.189133 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-b4d9f888-97cvc"]
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.206853 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg8ct\" (UniqueName: \"kubernetes.io/projected/8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd-kube-api-access-lg8ct\") pod \"smart-gateway-operator-b4d9f888-97cvc\" (UID: \"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd\") " pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.206921 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd-runner\") pod \"smart-gateway-operator-b4d9f888-97cvc\" (UID: \"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd\") " pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.308067 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-lg8ct\" (UniqueName: \"kubernetes.io/projected/8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd-kube-api-access-lg8ct\") pod \"smart-gateway-operator-b4d9f888-97cvc\" (UID: \"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd\") " pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.308377 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd-runner\") pod \"smart-gateway-operator-b4d9f888-97cvc\" (UID: \"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd\") " pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.308851 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd-runner\") pod \"smart-gateway-operator-b4d9f888-97cvc\" (UID: \"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd\") " pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.344998 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg8ct\" (UniqueName: \"kubernetes.io/projected/8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd-kube-api-access-lg8ct\") pod \"smart-gateway-operator-b4d9f888-97cvc\" (UID: \"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd\") " pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.491343 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc"
Nov 28 00:36:59 crc kubenswrapper[3556]: I1128 00:36:59.748757 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-b4d9f888-97cvc"]
Nov 28 00:36:59 crc kubenswrapper[3556]: W1128 00:36:59.755192 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eacead8_ae3d_4d50_b9b4_4f7c4261fbbd.slice/crio-5fbc815a40eba39ae73ac2fe8f732d472463b37ee27c38e11ead598c3be38784 WatchSource:0}: Error finding container 5fbc815a40eba39ae73ac2fe8f732d472463b37ee27c38e11ead598c3be38784: Status 404 returned error can't find the container with id 5fbc815a40eba39ae73ac2fe8f732d472463b37ee27c38e11ead598c3be38784
Nov 28 00:37:00 crc kubenswrapper[3556]: I1128 00:37:00.275392 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc" event={"ID":"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd","Type":"ContainerStarted","Data":"5fbc815a40eba39ae73ac2fe8f732d472463b37ee27c38e11ead598c3be38784"}
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.532834 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"]
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.533029 3556 topology_manager.go:215] "Topology Admit Handler" podUID="c852c1b7-7cec-4ae1-a067-0b7bcda673ca" podNamespace="service-telemetry" podName="service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.539378 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.543256 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"]
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.543549 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-ddw5z"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.584248 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c852c1b7-7cec-4ae1-a067-0b7bcda673ca-runner\") pod \"service-telemetry-operator-7f585466fb-ww6s8\" (UID: \"c852c1b7-7cec-4ae1-a067-0b7bcda673ca\") " pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.584340 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z8w2\" (UniqueName: \"kubernetes.io/projected/c852c1b7-7cec-4ae1-a067-0b7bcda673ca-kube-api-access-2z8w2\") pod \"service-telemetry-operator-7f585466fb-ww6s8\" (UID: \"c852c1b7-7cec-4ae1-a067-0b7bcda673ca\") " pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.685646 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c852c1b7-7cec-4ae1-a067-0b7bcda673ca-runner\") pod \"service-telemetry-operator-7f585466fb-ww6s8\" (UID: \"c852c1b7-7cec-4ae1-a067-0b7bcda673ca\") " pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.685741 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8w2\" (UniqueName: \"kubernetes.io/projected/c852c1b7-7cec-4ae1-a067-0b7bcda673ca-kube-api-access-2z8w2\") pod \"service-telemetry-operator-7f585466fb-ww6s8\" (UID: \"c852c1b7-7cec-4ae1-a067-0b7bcda673ca\") " pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.687982 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/c852c1b7-7cec-4ae1-a067-0b7bcda673ca-runner\") pod \"service-telemetry-operator-7f585466fb-ww6s8\" (UID: \"c852c1b7-7cec-4ae1-a067-0b7bcda673ca\") " pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.717724 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z8w2\" (UniqueName: \"kubernetes.io/projected/c852c1b7-7cec-4ae1-a067-0b7bcda673ca-kube-api-access-2z8w2\") pod \"service-telemetry-operator-7f585466fb-ww6s8\" (UID: \"c852c1b7-7cec-4ae1-a067-0b7bcda673ca\") " pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:05 crc kubenswrapper[3556]: I1128 00:37:05.856082 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"
Nov 28 00:37:12 crc kubenswrapper[3556]: I1128 00:37:12.693995 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7f585466fb-ww6s8"]
Nov 28 00:37:12 crc kubenswrapper[3556]: W1128 00:37:12.707460 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc852c1b7_7cec_4ae1_a067_0b7bcda673ca.slice/crio-967eea9f1624e057dd41f613495294c80a66038d447c9bc2e3e304c8e1f21925 WatchSource:0}: Error finding container 967eea9f1624e057dd41f613495294c80a66038d447c9bc2e3e304c8e1f21925: Status 404 returned error can't find the container with id 967eea9f1624e057dd41f613495294c80a66038d447c9bc2e3e304c8e1f21925
Nov 28 00:37:13 crc kubenswrapper[3556]: I1128 00:37:13.380119 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8" event={"ID":"c852c1b7-7cec-4ae1-a067-0b7bcda673ca","Type":"ContainerStarted","Data":"967eea9f1624e057dd41f613495294c80a66038d447c9bc2e3e304c8e1f21925"}
Nov 28 00:37:16 crc kubenswrapper[3556]: I1128 00:37:16.402214 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc" event={"ID":"8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd","Type":"ContainerStarted","Data":"261bccd219749aa04c14f5da0f1f4c1ab528308ae867a022c746d3c6bc2c0c84"}
Nov 28 00:37:16 crc kubenswrapper[3556]: I1128 00:37:16.428232 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-b4d9f888-97cvc" podStartSLOduration=1.7129668420000002 podStartE2EDuration="17.428186599s" podCreationTimestamp="2025-11-28 00:36:59 +0000 UTC" firstStartedPulling="2025-11-28 00:36:59.757732718 +0000 UTC m=+1481.349964718" lastFinishedPulling="2025-11-28 00:37:15.472952485 +0000 UTC m=+1497.065184475" observedRunningTime="2025-11-28 00:37:16.425116783 +0000 UTC m=+1498.017348783" watchObservedRunningTime="2025-11-28 00:37:16.428186599 +0000 UTC m=+1498.020418619"
Nov 28 00:37:18 crc kubenswrapper[3556]: I1128 00:37:18.718796 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:37:18 crc kubenswrapper[3556]: I1128 00:37:18.719153 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:37:18 crc kubenswrapper[3556]: I1128 00:37:18.719192 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:37:18 crc kubenswrapper[3556]: I1128 00:37:18.719227 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:37:18 crc kubenswrapper[3556]: I1128 00:37:18.719256 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:37:20 crc kubenswrapper[3556]: I1128 00:37:20.427530 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8" event={"ID":"c852c1b7-7cec-4ae1-a067-0b7bcda673ca","Type":"ContainerStarted","Data":"2e9080cde12116af6208275e89c98053549a84e5549f21455b802ed5f134a16e"}
Nov 28 00:37:20 crc kubenswrapper[3556]: I1128 00:37:20.448528 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-7f585466fb-ww6s8" podStartSLOduration=8.665408982 podStartE2EDuration="15.448479241s" podCreationTimestamp="2025-11-28 00:37:05 +0000 UTC" firstStartedPulling="2025-11-28 00:37:12.709136462 +0000 UTC m=+1494.301368462" lastFinishedPulling="2025-11-28 00:37:19.492206731 +0000 UTC m=+1501.084438721" observedRunningTime="2025-11-28 00:37:20.446109552 +0000 UTC m=+1502.038341562" watchObservedRunningTime="2025-11-28 00:37:20.448479241 +0000 UTC m=+1502.040711261"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.614126 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2v6lc"]
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.615868 3556 topology_manager.go:215] "Topology Admit Handler" podUID="c797db77-4d3e-477a-a841-21016b7e5788" podNamespace="openshift-marketplace" podName="redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.617355 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.638951 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v6lc"]
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.739236 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-utilities\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.739299 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-catalog-content\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.739337 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqr9s\" (UniqueName: \"kubernetes.io/projected/c797db77-4d3e-477a-a841-21016b7e5788-kube-api-access-jqr9s\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.840966 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-utilities\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.841076 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-catalog-content\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.841159 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-jqr9s\" (UniqueName: \"kubernetes.io/projected/c797db77-4d3e-477a-a841-21016b7e5788-kube-api-access-jqr9s\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.841807 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-catalog-content\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.841839 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-utilities\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.867726 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqr9s\" (UniqueName: \"kubernetes.io/projected/c797db77-4d3e-477a-a841-21016b7e5788-kube-api-access-jqr9s\") pod \"redhat-operators-2v6lc\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:32 crc kubenswrapper[3556]: I1128 00:37:32.931926 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:37:33 crc kubenswrapper[3556]: I1128 00:37:33.268737 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v6lc"]
Nov 28 00:37:33 crc kubenswrapper[3556]: I1128 00:37:33.493273 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v6lc" event={"ID":"c797db77-4d3e-477a-a841-21016b7e5788","Type":"ContainerStarted","Data":"fef9c09ee2373ac70096e557eaa8d57c3792703d835efe8d5e9dbb27a2dbd5be"}
Nov 28 00:37:34 crc kubenswrapper[3556]: I1128 00:37:34.500108 3556 generic.go:334] "Generic (PLEG): container finished" podID="c797db77-4d3e-477a-a841-21016b7e5788" containerID="fd113ff2dcabec3548f10ff5e642dfd261134d095f22f25557ae81aae1d0057c" exitCode=0
Nov 28 00:37:34 crc kubenswrapper[3556]: I1128 00:37:34.500172 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v6lc" event={"ID":"c797db77-4d3e-477a-a841-21016b7e5788","Type":"ContainerDied","Data":"fd113ff2dcabec3548f10ff5e642dfd261134d095f22f25557ae81aae1d0057c"}
Nov 28 00:37:35 crc kubenswrapper[3556]: I1128 00:37:35.519209 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v6lc" event={"ID":"c797db77-4d3e-477a-a841-21016b7e5788","Type":"ContainerStarted","Data":"1a805edf53e5bc94ea8f5c047cfeb9432c907f93eb58c9a2f153555a07f70818"}
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.700147 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-zx6wj"]
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.700802 3556 topology_manager.go:215] "Topology Admit Handler" podUID="38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" podNamespace="service-telemetry" podName="default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.701764 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.706460 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.706768 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.706883 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.706973 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-dvkk5"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.706990 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.707055 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.707157 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.732659 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-zx6wj"]
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.795417 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.795687 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.795745 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-config\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.795847 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2tz6\" (UniqueName: \"kubernetes.io/projected/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-kube-api-access-t2tz6\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.796006 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.796114 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-users\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.796173 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.896613 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-config\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.896670 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-t2tz6\" (UniqueName: \"kubernetes.io/projected/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-kube-api-access-t2tz6\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.896699 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.896731 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-users\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.896764 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.896789 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.896806 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.898610 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-config\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.902815 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.903198 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.905682 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.911886 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.913739 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2tz6\" (UniqueName: \"kubernetes.io/projected/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-kube-api-access-t2tz6\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:51 crc kubenswrapper[3556]: I1128 00:37:51.917625 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-users\") pod \"default-interconnect-84dbc59cb8-zx6wj\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:52 crc kubenswrapper[3556]: I1128 00:37:52.017523 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:37:52 crc kubenswrapper[3556]: I1128 00:37:52.284220 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-zx6wj"]
Nov 28 00:37:52 crc kubenswrapper[3556]: W1128 00:37:52.307219 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a1a8aa_ff87_4138_bee6_376ab9e7c2d8.slice/crio-320cad16e35e0e4bb064424a5eed46e192d396d253fcc65d8a3fa96c259a2a44 WatchSource:0}: Error finding container 320cad16e35e0e4bb064424a5eed46e192d396d253fcc65d8a3fa96c259a2a44: Status 404 returned error can't find the container with id 320cad16e35e0e4bb064424a5eed46e192d396d253fcc65d8a3fa96c259a2a44
Nov 28 00:37:52 crc kubenswrapper[3556]: I1128 00:37:52.604066 3556 generic.go:334] "Generic (PLEG): container finished" podID="c797db77-4d3e-477a-a841-21016b7e5788" containerID="1a805edf53e5bc94ea8f5c047cfeb9432c907f93eb58c9a2f153555a07f70818" exitCode=0
Nov 28 00:37:52 crc kubenswrapper[3556]: I1128 00:37:52.604120 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v6lc" event={"ID":"c797db77-4d3e-477a-a841-21016b7e5788","Type":"ContainerDied","Data":"1a805edf53e5bc94ea8f5c047cfeb9432c907f93eb58c9a2f153555a07f70818"}
Nov 28 00:37:52 crc kubenswrapper[3556]: I1128 00:37:52.605095 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj" event={"ID":"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8","Type":"ContainerStarted","Data":"320cad16e35e0e4bb064424a5eed46e192d396d253fcc65d8a3fa96c259a2a44"}
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.469368 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptwjg"]
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.469817 3556 topology_manager.go:215] "Topology Admit Handler" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" podNamespace="openshift-marketplace" podName="community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.471177 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.485316 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptwjg"]
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.516726 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-utilities\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.516770 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7bd\" (UniqueName: \"kubernetes.io/projected/134d4fdf-1364-4f42-9c82-3c85c59217ac-kube-api-access-hz7bd\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.516975 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-catalog-content\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.621512 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-utilities\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.621576 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-hz7bd\" (UniqueName: \"kubernetes.io/projected/134d4fdf-1364-4f42-9c82-3c85c59217ac-kube-api-access-hz7bd\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.621647 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-catalog-content\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.622183 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-catalog-content\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.624615 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-utilities\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.644922 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz7bd\" (UniqueName: \"kubernetes.io/projected/134d4fdf-1364-4f42-9c82-3c85c59217ac-kube-api-access-hz7bd\") pod \"community-operators-ptwjg\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") " pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:53 crc kubenswrapper[3556]: I1128 00:37:53.787901 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:37:54 crc kubenswrapper[3556]: I1128 00:37:54.089660 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptwjg"]
Nov 28 00:37:54 crc kubenswrapper[3556]: I1128 00:37:54.626074 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwjg" event={"ID":"134d4fdf-1364-4f42-9c82-3c85c59217ac","Type":"ContainerStarted","Data":"8fc362cc3df9a250ed1a61863317a0e01b98f5f6fcdb9c6bbfb25c957351b5f7"}
Nov 28 00:37:56 crc kubenswrapper[3556]: I1128 00:37:56.659485 3556 generic.go:334] "Generic (PLEG): container finished" podID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerID="58a1b8445ccbd044f8f0b54ce9720dfaa8fdd13da665fe9c1b9800f0cf20bc97" exitCode=0
Nov 28 00:37:56 crc kubenswrapper[3556]: I1128 00:37:56.659761 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwjg" event={"ID":"134d4fdf-1364-4f42-9c82-3c85c59217ac","Type":"ContainerDied","Data":"58a1b8445ccbd044f8f0b54ce9720dfaa8fdd13da665fe9c1b9800f0cf20bc97"}
Nov 28 00:37:56 crc kubenswrapper[3556]: I1128 00:37:56.662771 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v6lc" event={"ID":"c797db77-4d3e-477a-a841-21016b7e5788","Type":"ContainerStarted","Data":"5d7fef55f1d4ca7081e847f383c8ff2970bd02bfb632db82f62732a83597a63f"}
Nov 28 00:37:56 crc kubenswrapper[3556]: I1128 00:37:56.703476 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2v6lc" podStartSLOduration=6.229391982 podStartE2EDuration="24.703421817s" podCreationTimestamp="2025-11-28 00:37:32 +0000 UTC" firstStartedPulling="2025-11-28 00:37:34.502262226 +0000 UTC m=+1516.094494216" lastFinishedPulling="2025-11-28 00:37:52.976292051 +0000 UTC m=+1534.568524051" observedRunningTime="2025-11-28 00:37:56.697418859 +0000 UTC m=+1538.289650849" watchObservedRunningTime="2025-11-28 00:37:56.703421817 +0000 UTC m=+1538.295653827"
Nov 28 00:38:02 crc kubenswrapper[3556]: I1128 00:38:02.691480 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwjg" event={"ID":"134d4fdf-1364-4f42-9c82-3c85c59217ac","Type":"ContainerStarted","Data":"732dbda516ae8ca038b951095b7e7d998e352086ac98b714284502df55f2f702"}
Nov 28 00:38:02 crc kubenswrapper[3556]: I1128 00:38:02.692514 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj" event={"ID":"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8","Type":"ContainerStarted","Data":"475dcb8abb6d7d641d65f2078458a3e7bb4658b68151ec51de543baf01e84c88"}
Nov 28 00:38:02 crc kubenswrapper[3556]: I1128 00:38:02.724804 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj" podStartSLOduration=2.405875399 podStartE2EDuration="11.7247579s" podCreationTimestamp="2025-11-28 00:37:51 +0000 UTC" firstStartedPulling="2025-11-28 00:37:52.309544947 +0000 UTC m=+1533.901776937" lastFinishedPulling="2025-11-28 00:38:01.628427448 +0000 UTC m=+1543.220659438" observedRunningTime="2025-11-28 00:38:02.723734555 +0000 UTC m=+1544.315966555" watchObservedRunningTime="2025-11-28 00:38:02.7247579 +0000 UTC m=+1544.316989890"
Nov 28 00:38:02 crc kubenswrapper[3556]: I1128 00:38:02.932079 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2v6lc"
Nov 28 00:38:02 crc kubenswrapper[3556]: I1128 00:38:02.932133 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy"
pod="openshift-marketplace/redhat-operators-2v6lc" Nov 28 00:38:03 crc kubenswrapper[3556]: I1128 00:38:03.031111 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2v6lc" Nov 28 00:38:03 crc kubenswrapper[3556]: I1128 00:38:03.815386 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2v6lc" Nov 28 00:38:03 crc kubenswrapper[3556]: I1128 00:38:03.861974 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v6lc"] Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.262228 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.262376 3556 topology_manager.go:215] "Topology Admit Handler" podUID="0c7c2afb-f325-4137-96a0-e217c2240fb1" podNamespace="service-telemetry" podName="prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.264254 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.268432 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.268505 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.268725 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.268782 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.269207 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-hpr2j" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.270650 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.270661 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.273164 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.275110 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372669 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\") pod 
\"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372722 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-web-config\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372757 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c7c2afb-f325-4137-96a0-e217c2240fb1-config-out\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372780 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372806 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c7c2afb-f325-4137-96a0-e217c2240fb1-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372833 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hqm\" (UniqueName: 
\"kubernetes.io/projected/0c7c2afb-f325-4137-96a0-e217c2240fb1-kube-api-access-w2hqm\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372885 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c7c2afb-f325-4137-96a0-e217c2240fb1-tls-assets\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372933 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7c2afb-f325-4137-96a0-e217c2240fb1-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372971 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.372995 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-config\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474038 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c7c2afb-f325-4137-96a0-e217c2240fb1-config-out\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474082 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474102 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c7c2afb-f325-4137-96a0-e217c2240fb1-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474125 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hqm\" (UniqueName: \"kubernetes.io/projected/0c7c2afb-f325-4137-96a0-e217c2240fb1-kube-api-access-w2hqm\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474154 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c7c2afb-f325-4137-96a0-e217c2240fb1-tls-assets\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474185 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7c2afb-f325-4137-96a0-e217c2240fb1-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474221 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474245 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-config\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: E1128 00:38:04.474253 3556 secret.go:194] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Nov 28 00:38:04 crc kubenswrapper[3556]: E1128 00:38:04.474329 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls podName:0c7c2afb-f325-4137-96a0-e217c2240fb1 nodeName:}" failed. No retries permitted until 2025-11-28 00:38:04.974310582 +0000 UTC m=+1546.566542652 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0c7c2afb-f325-4137-96a0-e217c2240fb1") : secret "default-prometheus-proxy-tls" not found Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474684 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.474724 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-web-config\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.475321 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7c2afb-f325-4137-96a0-e217c2240fb1-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.475329 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0c7c2afb-f325-4137-96a0-e217c2240fb1-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.479644 3556 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.479687 3556 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94b4a2efc488e4dc7ee16701148205b909b4a170cc8c994dae7c8161d708ba06/globalmount\"" pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.480218 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c7c2afb-f325-4137-96a0-e217c2240fb1-tls-assets\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.480240 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-web-config\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.480374 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.483114 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-config\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.495286 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c7c2afb-f325-4137-96a0-e217c2240fb1-config-out\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.496778 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hqm\" (UniqueName: \"kubernetes.io/projected/0c7c2afb-f325-4137-96a0-e217c2240fb1-kube-api-access-w2hqm\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.514868 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8a9705e2-7283-43b9-8886-dacd47d06e7d\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: I1128 00:38:04.983494 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:04 crc kubenswrapper[3556]: E1128 00:38:04.983802 3556 secret.go:194] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Nov 28 
00:38:04 crc kubenswrapper[3556]: E1128 00:38:04.983997 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls podName:0c7c2afb-f325-4137-96a0-e217c2240fb1 nodeName:}" failed. No retries permitted until 2025-11-28 00:38:05.983976375 +0000 UTC m=+1547.576208365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "0c7c2afb-f325-4137-96a0-e217c2240fb1") : secret "default-prometheus-proxy-tls" not found Nov 28 00:38:05 crc kubenswrapper[3556]: I1128 00:38:05.707989 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2v6lc" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="registry-server" containerID="cri-o://5d7fef55f1d4ca7081e847f383c8ff2970bd02bfb632db82f62732a83597a63f" gracePeriod=2 Nov 28 00:38:05 crc kubenswrapper[3556]: I1128 00:38:05.997241 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:06 crc kubenswrapper[3556]: I1128 00:38:06.003411 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c7c2afb-f325-4137-96a0-e217c2240fb1-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"0c7c2afb-f325-4137-96a0-e217c2240fb1\") " pod="service-telemetry/prometheus-default-0" Nov 28 00:38:06 crc kubenswrapper[3556]: I1128 00:38:06.080412 3556 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Nov 28 00:38:06 crc kubenswrapper[3556]: I1128 00:38:06.295686 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Nov 28 00:38:06 crc kubenswrapper[3556]: W1128 00:38:06.301170 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c7c2afb_f325_4137_96a0_e217c2240fb1.slice/crio-18d338e7083458a7ab2d6054e8a8ceea842a9d59d940e83c93ca1fa1d3812445 WatchSource:0}: Error finding container 18d338e7083458a7ab2d6054e8a8ceea842a9d59d940e83c93ca1fa1d3812445: Status 404 returned error can't find the container with id 18d338e7083458a7ab2d6054e8a8ceea842a9d59d940e83c93ca1fa1d3812445 Nov 28 00:38:06 crc kubenswrapper[3556]: I1128 00:38:06.714363 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0c7c2afb-f325-4137-96a0-e217c2240fb1","Type":"ContainerStarted","Data":"18d338e7083458a7ab2d6054e8a8ceea842a9d59d940e83c93ca1fa1d3812445"} Nov 28 00:38:11 crc kubenswrapper[3556]: I1128 00:38:11.793356 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2v6lc" Nov 28 00:38:11 crc kubenswrapper[3556]: I1128 00:38:11.982029 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-catalog-content\") pod \"c797db77-4d3e-477a-a841-21016b7e5788\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " Nov 28 00:38:11 crc kubenswrapper[3556]: I1128 00:38:11.982161 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-utilities\") pod \"c797db77-4d3e-477a-a841-21016b7e5788\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " Nov 28 00:38:11 crc kubenswrapper[3556]: I1128 00:38:11.982239 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqr9s\" (UniqueName: \"kubernetes.io/projected/c797db77-4d3e-477a-a841-21016b7e5788-kube-api-access-jqr9s\") pod \"c797db77-4d3e-477a-a841-21016b7e5788\" (UID: \"c797db77-4d3e-477a-a841-21016b7e5788\") " Nov 28 00:38:11 crc kubenswrapper[3556]: I1128 00:38:11.982979 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-utilities" (OuterVolumeSpecName: "utilities") pod "c797db77-4d3e-477a-a841-21016b7e5788" (UID: "c797db77-4d3e-477a-a841-21016b7e5788"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:38:11 crc kubenswrapper[3556]: I1128 00:38:11.990263 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c797db77-4d3e-477a-a841-21016b7e5788-kube-api-access-jqr9s" (OuterVolumeSpecName: "kube-api-access-jqr9s") pod "c797db77-4d3e-477a-a841-21016b7e5788" (UID: "c797db77-4d3e-477a-a841-21016b7e5788"). InnerVolumeSpecName "kube-api-access-jqr9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.083782 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.083824 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jqr9s\" (UniqueName: \"kubernetes.io/projected/c797db77-4d3e-477a-a841-21016b7e5788-kube-api-access-jqr9s\") on node \"crc\" DevicePath \"\"" Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.441564 3556 generic.go:334] "Generic (PLEG): container finished" podID="c797db77-4d3e-477a-a841-21016b7e5788" containerID="5d7fef55f1d4ca7081e847f383c8ff2970bd02bfb632db82f62732a83597a63f" exitCode=0 Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.441610 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v6lc" event={"ID":"c797db77-4d3e-477a-a841-21016b7e5788","Type":"ContainerDied","Data":"5d7fef55f1d4ca7081e847f383c8ff2970bd02bfb632db82f62732a83597a63f"} Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.441645 3556 scope.go:117] "RemoveContainer" containerID="5d7fef55f1d4ca7081e847f383c8ff2970bd02bfb632db82f62732a83597a63f" Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.471498 3556 scope.go:117] "RemoveContainer" containerID="1a805edf53e5bc94ea8f5c047cfeb9432c907f93eb58c9a2f153555a07f70818" Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.552231 3556 scope.go:117] "RemoveContainer" containerID="fd113ff2dcabec3548f10ff5e642dfd261134d095f22f25557ae81aae1d0057c" Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.755578 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"c797db77-4d3e-477a-a841-21016b7e5788" (UID: "c797db77-4d3e-477a-a841-21016b7e5788"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:38:12 crc kubenswrapper[3556]: I1128 00:38:12.792997 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c797db77-4d3e-477a-a841-21016b7e5788-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 00:38:13 crc kubenswrapper[3556]: I1128 00:38:13.447816 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v6lc" Nov 28 00:38:13 crc kubenswrapper[3556]: I1128 00:38:13.447890 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v6lc" event={"ID":"c797db77-4d3e-477a-a841-21016b7e5788","Type":"ContainerDied","Data":"fef9c09ee2373ac70096e557eaa8d57c3792703d835efe8d5e9dbb27a2dbd5be"} Nov 28 00:38:13 crc kubenswrapper[3556]: I1128 00:38:13.449788 3556 generic.go:334] "Generic (PLEG): container finished" podID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerID="732dbda516ae8ca038b951095b7e7d998e352086ac98b714284502df55f2f702" exitCode=0 Nov 28 00:38:13 crc kubenswrapper[3556]: I1128 00:38:13.449830 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwjg" event={"ID":"134d4fdf-1364-4f42-9c82-3c85c59217ac","Type":"ContainerDied","Data":"732dbda516ae8ca038b951095b7e7d998e352086ac98b714284502df55f2f702"} Nov 28 00:38:13 crc kubenswrapper[3556]: I1128 00:38:13.496323 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v6lc"] Nov 28 00:38:13 crc kubenswrapper[3556]: I1128 00:38:13.500373 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2v6lc"] Nov 28 00:38:14 crc kubenswrapper[3556]: I1128 00:38:14.457138 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ptwjg" event={"ID":"134d4fdf-1364-4f42-9c82-3c85c59217ac","Type":"ContainerStarted","Data":"6141c9bcd562159b78607ff2643f7ff075b2d43871d8af23f9477a2587a60bc4"}
Nov 28 00:38:14 crc kubenswrapper[3556]: I1128 00:38:14.479227 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptwjg" podStartSLOduration=4.218522632 podStartE2EDuration="21.479177801s" podCreationTimestamp="2025-11-28 00:37:53 +0000 UTC" firstStartedPulling="2025-11-28 00:37:56.661036242 +0000 UTC m=+1538.253268232" lastFinishedPulling="2025-11-28 00:38:13.921691411 +0000 UTC m=+1555.513923401" observedRunningTime="2025-11-28 00:38:14.473738397 +0000 UTC m=+1556.065970387" watchObservedRunningTime="2025-11-28 00:38:14.479177801 +0000 UTC m=+1556.071409801"
Nov 28 00:38:14 crc kubenswrapper[3556]: I1128 00:38:14.941688 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c797db77-4d3e-477a-a841-21016b7e5788" path="/var/lib/kubelet/pods/c797db77-4d3e-477a-a841-21016b7e5788/volumes"
Nov 28 00:38:18 crc kubenswrapper[3556]: I1128 00:38:18.720494 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:38:18 crc kubenswrapper[3556]: I1128 00:38:18.721057 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:38:18 crc kubenswrapper[3556]: I1128 00:38:18.721089 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:38:18 crc kubenswrapper[3556]: I1128 00:38:18.721127 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:38:18 crc kubenswrapper[3556]: I1128 00:38:18.721161 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:38:19 crc kubenswrapper[3556]: I1128 00:38:19.485874 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0c7c2afb-f325-4137-96a0-e217c2240fb1","Type":"ContainerStarted","Data":"e3bff8505fd103fdee90711eb891dfddced93d833eeebb8f536de20429557354"}
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.592780 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"]
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.594331 3556 topology_manager.go:215] "Topology Admit Handler" podUID="a6b10489-636f-4218-9cac-8fc73e3d3e34" podNamespace="service-telemetry" podName="default-snmp-webhook-6755fc87b7-7vwps"
Nov 28 00:38:20 crc kubenswrapper[3556]: E1128 00:38:20.594599 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="registry-server"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.594619 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="registry-server"
Nov 28 00:38:20 crc kubenswrapper[3556]: E1128 00:38:20.594637 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="extract-utilities"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.594646 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="extract-utilities"
Nov 28 00:38:20 crc kubenswrapper[3556]: E1128 00:38:20.594660 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="extract-content"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.594669 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="extract-content"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.594821 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="c797db77-4d3e-477a-a841-21016b7e5788" containerName="registry-server"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.595335 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.602061 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"]
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.727844 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmb87\" (UniqueName: \"kubernetes.io/projected/a6b10489-636f-4218-9cac-8fc73e3d3e34-kube-api-access-nmb87\") pod \"default-snmp-webhook-6755fc87b7-7vwps\" (UID: \"a6b10489-636f-4218-9cac-8fc73e3d3e34\") " pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.829062 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-nmb87\" (UniqueName: \"kubernetes.io/projected/a6b10489-636f-4218-9cac-8fc73e3d3e34-kube-api-access-nmb87\") pod \"default-snmp-webhook-6755fc87b7-7vwps\" (UID: \"a6b10489-636f-4218-9cac-8fc73e3d3e34\") " pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.865869 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmb87\" (UniqueName: \"kubernetes.io/projected/a6b10489-636f-4218-9cac-8fc73e3d3e34-kube-api-access-nmb87\") pod \"default-snmp-webhook-6755fc87b7-7vwps\" (UID: \"a6b10489-636f-4218-9cac-8fc73e3d3e34\") " pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"
Nov 28 00:38:20 crc kubenswrapper[3556]: I1128 00:38:20.909656 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"
Nov 28 00:38:21 crc kubenswrapper[3556]: I1128 00:38:21.332994 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6755fc87b7-7vwps"]
Nov 28 00:38:21 crc kubenswrapper[3556]: W1128 00:38:21.341253 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b10489_636f_4218_9cac_8fc73e3d3e34.slice/crio-8e60c5be4cca0e0d944f0e722e2230f1624f2846bb8c87267bdbdf6a2d44eeb6 WatchSource:0}: Error finding container 8e60c5be4cca0e0d944f0e722e2230f1624f2846bb8c87267bdbdf6a2d44eeb6: Status 404 returned error can't find the container with id 8e60c5be4cca0e0d944f0e722e2230f1624f2846bb8c87267bdbdf6a2d44eeb6
Nov 28 00:38:21 crc kubenswrapper[3556]: I1128 00:38:21.500682 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps" event={"ID":"a6b10489-636f-4218-9cac-8fc73e3d3e34","Type":"ContainerStarted","Data":"8e60c5be4cca0e0d944f0e722e2230f1624f2846bb8c87267bdbdf6a2d44eeb6"}
Nov 28 00:38:23 crc kubenswrapper[3556]: I1128 00:38:23.788209 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:38:23 crc kubenswrapper[3556]: I1128 00:38:23.789553 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:38:23 crc kubenswrapper[3556]: I1128 00:38:23.895716 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:38:24 crc kubenswrapper[3556]: I1128 00:38:24.520795 3556 generic.go:334] "Generic (PLEG): container finished" podID="0c7c2afb-f325-4137-96a0-e217c2240fb1" containerID="e3bff8505fd103fdee90711eb891dfddced93d833eeebb8f536de20429557354" exitCode=0
Nov 28 00:38:24 crc kubenswrapper[3556]: I1128 00:38:24.521667 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0c7c2afb-f325-4137-96a0-e217c2240fb1","Type":"ContainerDied","Data":"e3bff8505fd103fdee90711eb891dfddced93d833eeebb8f536de20429557354"}
Nov 28 00:38:24 crc kubenswrapper[3556]: I1128 00:38:24.644045 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:38:24 crc kubenswrapper[3556]: I1128 00:38:24.682028 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptwjg"]
Nov 28 00:38:26 crc kubenswrapper[3556]: I1128 00:38:26.532890 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptwjg" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="registry-server" containerID="cri-o://6141c9bcd562159b78607ff2643f7ff075b2d43871d8af23f9477a2587a60bc4" gracePeriod=2
Nov 28 00:38:27 crc kubenswrapper[3556]: I1128 00:38:27.564722 3556 generic.go:334] "Generic (PLEG): container finished" podID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerID="6141c9bcd562159b78607ff2643f7ff075b2d43871d8af23f9477a2587a60bc4" exitCode=0
Nov 28 00:38:27 crc kubenswrapper[3556]: I1128 00:38:27.564761 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwjg" event={"ID":"134d4fdf-1364-4f42-9c82-3c85c59217ac","Type":"ContainerDied","Data":"6141c9bcd562159b78607ff2643f7ff075b2d43871d8af23f9477a2587a60bc4"}
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.004654 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.004846 3556 topology_manager.go:215] "Topology Admit Handler" podUID="31086a87-8ffa-4122-9924-f46df3be87fd" podNamespace="service-telemetry" podName="alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.006641 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.009395 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.009624 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.009955 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.010261 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.012427 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-48dwv"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.015291 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.021854 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150036 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150366 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xjn\" (UniqueName: \"kubernetes.io/projected/31086a87-8ffa-4122-9924-f46df3be87fd-kube-api-access-g7xjn\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150415 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31086a87-8ffa-4122-9924-f46df3be87fd-config-out\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150447 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-web-config\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150492 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150526 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150557 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31086a87-8ffa-4122-9924-f46df3be87fd-tls-assets\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150596 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-config-volume\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.150640 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251570 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251628 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251656 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31086a87-8ffa-4122-9924-f46df3be87fd-tls-assets\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251689 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-config-volume\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251724 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251758 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251778 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xjn\" (UniqueName: \"kubernetes.io/projected/31086a87-8ffa-4122-9924-f46df3be87fd-kube-api-access-g7xjn\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251807 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31086a87-8ffa-4122-9924-f46df3be87fd-config-out\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.251828 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-web-config\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.257473 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31086a87-8ffa-4122-9924-f46df3be87fd-tls-assets\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.257494 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.258437 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31086a87-8ffa-4122-9924-f46df3be87fd-config-out\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.261536 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-web-config\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.262538 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-config-volume\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.263020 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.263214 3556 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.263248 3556 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a836fa44833063f6f459e65eacf894ad0efeb8ee6fa75ca3c2ef07517e6a9ed2/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.269974 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xjn\" (UniqueName: \"kubernetes.io/projected/31086a87-8ffa-4122-9924-f46df3be87fd-kube-api-access-g7xjn\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.270172 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/31086a87-8ffa-4122-9924-f46df3be87fd-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.293084 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11c8d6c3-07df-4cd6-b501-f2f61dcad87b\") pod \"alertmanager-default-0\" (UID: \"31086a87-8ffa-4122-9924-f46df3be87fd\") " pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:28 crc kubenswrapper[3556]: I1128 00:38:28.325181 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.220889 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.368193 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz7bd\" (UniqueName: \"kubernetes.io/projected/134d4fdf-1364-4f42-9c82-3c85c59217ac-kube-api-access-hz7bd\") pod \"134d4fdf-1364-4f42-9c82-3c85c59217ac\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") "
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.368251 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-catalog-content\") pod \"134d4fdf-1364-4f42-9c82-3c85c59217ac\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") "
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.368289 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-utilities\") pod \"134d4fdf-1364-4f42-9c82-3c85c59217ac\" (UID: \"134d4fdf-1364-4f42-9c82-3c85c59217ac\") "
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.369227 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-utilities" (OuterVolumeSpecName: "utilities") pod "134d4fdf-1364-4f42-9c82-3c85c59217ac" (UID: "134d4fdf-1364-4f42-9c82-3c85c59217ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.377241 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134d4fdf-1364-4f42-9c82-3c85c59217ac-kube-api-access-hz7bd" (OuterVolumeSpecName: "kube-api-access-hz7bd") pod "134d4fdf-1364-4f42-9c82-3c85c59217ac" (UID: "134d4fdf-1364-4f42-9c82-3c85c59217ac"). InnerVolumeSpecName "kube-api-access-hz7bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.469594 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hz7bd\" (UniqueName: \"kubernetes.io/projected/134d4fdf-1364-4f42-9c82-3c85c59217ac-kube-api-access-hz7bd\") on node \"crc\" DevicePath \"\""
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.469827 3556 reconciler_common.go:300] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.587926 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwjg" event={"ID":"134d4fdf-1364-4f42-9c82-3c85c59217ac","Type":"ContainerDied","Data":"8fc362cc3df9a250ed1a61863317a0e01b98f5f6fcdb9c6bbfb25c957351b5f7"}
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.587969 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptwjg"
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.587981 3556 scope.go:117] "RemoveContainer" containerID="6141c9bcd562159b78607ff2643f7ff075b2d43871d8af23f9477a2587a60bc4"
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.648924 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Nov 28 00:38:29 crc kubenswrapper[3556]: I1128 00:38:29.684270 3556 scope.go:117] "RemoveContainer" containerID="732dbda516ae8ca038b951095b7e7d998e352086ac98b714284502df55f2f702"
Nov 28 00:38:29 crc kubenswrapper[3556]: W1128 00:38:29.685308 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31086a87_8ffa_4122_9924_f46df3be87fd.slice/crio-1f0e90f772d5914fffd181e159d14bc0db754b6c6e68809825b3c3f548d80fbf WatchSource:0}: Error finding container 1f0e90f772d5914fffd181e159d14bc0db754b6c6e68809825b3c3f548d80fbf: Status 404 returned error can't find the container with id 1f0e90f772d5914fffd181e159d14bc0db754b6c6e68809825b3c3f548d80fbf
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.003352 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "134d4fdf-1364-4f42-9c82-3c85c59217ac" (UID: "134d4fdf-1364-4f42-9c82-3c85c59217ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.083569 3556 reconciler_common.go:300] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134d4fdf-1364-4f42-9c82-3c85c59217ac-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.114993 3556 scope.go:117] "RemoveContainer" containerID="58a1b8445ccbd044f8f0b54ce9720dfaa8fdd13da665fe9c1b9800f0cf20bc97"
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.263615 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptwjg"]
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.281023 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptwjg"]
Nov 28 00:38:30 crc kubenswrapper[3556]: E1128 00:38:30.345772 3556 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134d4fdf_1364_4f42_9c82_3c85c59217ac.slice/crio-8fc362cc3df9a250ed1a61863317a0e01b98f5f6fcdb9c6bbfb25c957351b5f7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134d4fdf_1364_4f42_9c82_3c85c59217ac.slice\": RecentStats: unable to find data in memory cache]"
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.597524 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"31086a87-8ffa-4122-9924-f46df3be87fd","Type":"ContainerStarted","Data":"1f0e90f772d5914fffd181e159d14bc0db754b6c6e68809825b3c3f548d80fbf"}
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.599127 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps" event={"ID":"a6b10489-636f-4218-9cac-8fc73e3d3e34","Type":"ContainerStarted","Data":"6ed7c3ffebec64821a52a05cf1938769c757e78ad2c6218b87424423d6de8263"}
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.617778 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6755fc87b7-7vwps" podStartSLOduration=2.277617507 podStartE2EDuration="10.617720082s" podCreationTimestamp="2025-11-28 00:38:20 +0000 UTC" firstStartedPulling="2025-11-28 00:38:21.344410126 +0000 UTC m=+1562.936642106" lastFinishedPulling="2025-11-28 00:38:29.684512691 +0000 UTC m=+1571.276744681" observedRunningTime="2025-11-28 00:38:30.615625042 +0000 UTC m=+1572.207857032" watchObservedRunningTime="2025-11-28 00:38:30.617720082 +0000 UTC m=+1572.209952082"
Nov 28 00:38:30 crc kubenswrapper[3556]: I1128 00:38:30.921339 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" path="/var/lib/kubelet/pods/134d4fdf-1364-4f42-9c82-3c85c59217ac/volumes"
Nov 28 00:38:35 crc kubenswrapper[3556]: I1128 00:38:35.626652 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"31086a87-8ffa-4122-9924-f46df3be87fd","Type":"ContainerStarted","Data":"a22998c3d78b481f0587ff92701bb68b69048b1a116f4e44eedfd0d7e0f3155b"}
Nov 28 00:38:35 crc kubenswrapper[3556]: I1128 00:38:35.628740 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0c7c2afb-f325-4137-96a0-e217c2240fb1","Type":"ContainerStarted","Data":"2027ff34aa38ce408458060360b424e28e9eb400b145831e56f23402b923b2be"}
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.241723 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"]
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.242231 3556 topology_manager.go:215] "Topology Admit Handler" podUID="50a32d2a-8a43-446a-841a-2f4c8dc0932a" podNamespace="service-telemetry" podName="default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: E1128 00:38:39.242386 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="extract-content"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.242397 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="extract-content"
Nov 28 00:38:39 crc kubenswrapper[3556]: E1128 00:38:39.242407 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="registry-server"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.242414 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="registry-server"
Nov 28 00:38:39 crc kubenswrapper[3556]: E1128 00:38:39.242434 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="extract-utilities"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.242439 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="extract-utilities"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.242585 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="134d4fdf-1364-4f42-9c82-3c85c59217ac" containerName="registry-server"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.243354 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.245904 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.245960 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.246131 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.246153 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-df4zp"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.255178 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"]
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.318191 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmtl\" (UniqueName: \"kubernetes.io/projected/50a32d2a-8a43-446a-841a-2f4c8dc0932a-kube-api-access-jgmtl\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.318318 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.318357 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.318391 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/50a32d2a-8a43-446a-841a-2f4c8dc0932a-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.318503 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/50a32d2a-8a43-446a-841a-2f4c8dc0932a-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.419365 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.419435 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.419517 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/50a32d2a-8a43-446a-841a-2f4c8dc0932a-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: E1128 00:38:39.419554 3556 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.419638 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/50a32d2a-8a43-446a-841a-2f4c8dc0932a-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: E1128 00:38:39.419654 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls podName:50a32d2a-8a43-446a-841a-2f4c8dc0932a nodeName:}" failed. No retries permitted until 2025-11-28 00:38:39.919628785 +0000 UTC m=+1581.511860785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" (UID: "50a32d2a-8a43-446a-841a-2f4c8dc0932a") : secret "default-cloud1-coll-meter-proxy-tls" not found
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.420176 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/50a32d2a-8a43-446a-841a-2f4c8dc0932a-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.420501 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/50a32d2a-8a43-446a-841a-2f4c8dc0932a-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.420575 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmtl\" (UniqueName: \"kubernetes.io/projected/50a32d2a-8a43-446a-841a-2f4c8dc0932a-kube-api-access-jgmtl\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"
Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.427931 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-session-secret\") pod
\"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.452418 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmtl\" (UniqueName: \"kubernetes.io/projected/50a32d2a-8a43-446a-841a-2f4c8dc0932a-kube-api-access-jgmtl\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.652832 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0c7c2afb-f325-4137-96a0-e217c2240fb1","Type":"ContainerStarted","Data":"4cecff78e67dea168f54db953c4b4f4c55dab7a9c75a25e2557277a8d7e98bbd"} Nov 28 00:38:39 crc kubenswrapper[3556]: I1128 00:38:39.928335 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" Nov 28 00:38:39 crc kubenswrapper[3556]: E1128 00:38:39.928476 3556 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Nov 28 00:38:39 crc kubenswrapper[3556]: E1128 00:38:39.928531 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls podName:50a32d2a-8a43-446a-841a-2f4c8dc0932a nodeName:}" failed. 
No retries permitted until 2025-11-28 00:38:40.928517724 +0000 UTC m=+1582.520749714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" (UID: "50a32d2a-8a43-446a-841a-2f4c8dc0932a") : secret "default-cloud1-coll-meter-proxy-tls" not found Nov 28 00:38:40 crc kubenswrapper[3556]: I1128 00:38:40.663162 3556 generic.go:334] "Generic (PLEG): container finished" podID="31086a87-8ffa-4122-9924-f46df3be87fd" containerID="a22998c3d78b481f0587ff92701bb68b69048b1a116f4e44eedfd0d7e0f3155b" exitCode=0 Nov 28 00:38:40 crc kubenswrapper[3556]: I1128 00:38:40.663239 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"31086a87-8ffa-4122-9924-f46df3be87fd","Type":"ContainerDied","Data":"a22998c3d78b481f0587ff92701bb68b69048b1a116f4e44eedfd0d7e0f3155b"} Nov 28 00:38:40 crc kubenswrapper[3556]: I1128 00:38:40.938556 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" Nov 28 00:38:40 crc kubenswrapper[3556]: I1128 00:38:40.943507 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a32d2a-8a43-446a-841a-2f4c8dc0932a-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68\" (UID: \"50a32d2a-8a43-446a-841a-2f4c8dc0932a\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.063247 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.191743 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj"] Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.191849 3556 topology_manager.go:215] "Topology Admit Handler" podUID="fc2ffb90-6204-4abc-90b8-1f67c8086a99" podNamespace="service-telemetry" podName="default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.192853 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.196003 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.196171 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.218552 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj"] Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.333323 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68"] Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.347315 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/fc2ffb90-6204-4abc-90b8-1f67c8086a99-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.347375 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.347398 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.347436 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfl9\" (UniqueName: \"kubernetes.io/projected/fc2ffb90-6204-4abc-90b8-1f67c8086a99-kube-api-access-cqfl9\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.347461 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/fc2ffb90-6204-4abc-90b8-1f67c8086a99-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.448828 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fc2ffb90-6204-4abc-90b8-1f67c8086a99-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.448883 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.448910 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.448954 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-cqfl9\" (UniqueName: \"kubernetes.io/projected/fc2ffb90-6204-4abc-90b8-1f67c8086a99-kube-api-access-cqfl9\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: 
\"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.448978 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fc2ffb90-6204-4abc-90b8-1f67c8086a99-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: E1128 00:38:41.449338 3556 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 28 00:38:41 crc kubenswrapper[3556]: E1128 00:38:41.449423 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls podName:fc2ffb90-6204-4abc-90b8-1f67c8086a99 nodeName:}" failed. No retries permitted until 2025-11-28 00:38:41.94940285 +0000 UTC m=+1583.541634840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" (UID: "fc2ffb90-6204-4abc-90b8-1f67c8086a99") : secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.449555 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fc2ffb90-6204-4abc-90b8-1f67c8086a99-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.450117 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fc2ffb90-6204-4abc-90b8-1f67c8086a99-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.454967 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.467849 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqfl9\" (UniqueName: \"kubernetes.io/projected/fc2ffb90-6204-4abc-90b8-1f67c8086a99-kube-api-access-cqfl9\") pod 
\"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.669786 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerStarted","Data":"98d06114cce4c9ef31e339141f2d6198d52a89058b36da9768b6ab2c4a74db66"} Nov 28 00:38:41 crc kubenswrapper[3556]: I1128 00:38:41.954982 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:41 crc kubenswrapper[3556]: E1128 00:38:41.955178 3556 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 28 00:38:41 crc kubenswrapper[3556]: E1128 00:38:41.955244 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls podName:fc2ffb90-6204-4abc-90b8-1f67c8086a99 nodeName:}" failed. No retries permitted until 2025-11-28 00:38:42.955223033 +0000 UTC m=+1584.547455033 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" (UID: "fc2ffb90-6204-4abc-90b8-1f67c8086a99") : secret "default-cloud1-ceil-meter-proxy-tls" not found Nov 28 00:38:42 crc kubenswrapper[3556]: I1128 00:38:42.972606 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:43 crc kubenswrapper[3556]: I1128 00:38:42.994646 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc2ffb90-6204-4abc-90b8-1f67c8086a99-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-7866965967-qtslj\" (UID: \"fc2ffb90-6204-4abc-90b8-1f67c8086a99\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:43 crc kubenswrapper[3556]: I1128 00:38:43.030106 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.220172 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv"] Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.220608 3556 topology_manager.go:215] "Topology Admit Handler" podUID="fdf35e1b-f7c9-41e2-8ae8-308f80623968" podNamespace="service-telemetry" podName="default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.221961 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.225396 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.225430 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.239750 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv"] Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.403774 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fdf35e1b-f7c9-41e2-8ae8-308f80623968-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.403872 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vpwg6\" (UniqueName: \"kubernetes.io/projected/fdf35e1b-f7c9-41e2-8ae8-308f80623968-kube-api-access-vpwg6\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.403997 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fdf35e1b-f7c9-41e2-8ae8-308f80623968-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.404106 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.404148 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.505837 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/fdf35e1b-f7c9-41e2-8ae8-308f80623968-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.506124 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.506151 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.506180 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fdf35e1b-f7c9-41e2-8ae8-308f80623968-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.506235 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwg6\" (UniqueName: \"kubernetes.io/projected/fdf35e1b-f7c9-41e2-8ae8-308f80623968-kube-api-access-vpwg6\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: 
\"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: E1128 00:38:45.506276 3556 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Nov 28 00:38:45 crc kubenswrapper[3556]: E1128 00:38:45.506347 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls podName:fdf35e1b-f7c9-41e2-8ae8-308f80623968 nodeName:}" failed. No retries permitted until 2025-11-28 00:38:46.006323513 +0000 UTC m=+1587.598555503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" (UID: "fdf35e1b-f7c9-41e2-8ae8-308f80623968") : secret "default-cloud1-sens-meter-proxy-tls" not found Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.506685 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fdf35e1b-f7c9-41e2-8ae8-308f80623968-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.506831 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fdf35e1b-f7c9-41e2-8ae8-308f80623968-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc 
kubenswrapper[3556]: I1128 00:38:45.517115 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:45 crc kubenswrapper[3556]: I1128 00:38:45.532998 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwg6\" (UniqueName: \"kubernetes.io/projected/fdf35e1b-f7c9-41e2-8ae8-308f80623968-kube-api-access-vpwg6\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:46 crc kubenswrapper[3556]: I1128 00:38:46.013747 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" Nov 28 00:38:46 crc kubenswrapper[3556]: E1128 00:38:46.014414 3556 secret.go:194] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Nov 28 00:38:46 crc kubenswrapper[3556]: E1128 00:38:46.014464 3556 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls podName:fdf35e1b-f7c9-41e2-8ae8-308f80623968 nodeName:}" failed. No retries permitted until 2025-11-28 00:38:47.014448383 +0000 UTC m=+1588.606680373 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" (UID: "fdf35e1b-f7c9-41e2-8ae8-308f80623968") : secret "default-cloud1-sens-meter-proxy-tls" not found
Nov 28 00:38:47 crc kubenswrapper[3556]: I1128 00:38:47.024510 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv"
Nov 28 00:38:47 crc kubenswrapper[3556]: I1128 00:38:47.030623 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdf35e1b-f7c9-41e2-8ae8-308f80623968-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv\" (UID: \"fdf35e1b-f7c9-41e2-8ae8-308f80623968\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv"
Nov 28 00:38:47 crc kubenswrapper[3556]: I1128 00:38:47.037044 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv"
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.030864 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv"]
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.123695 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj"]
Nov 28 00:38:48 crc kubenswrapper[3556]: W1128 00:38:48.126852 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2ffb90_6204_4abc_90b8_1f67c8086a99.slice/crio-5f86264d6f27b3151ff6e5548f969d24549a504af1d076ff7f732106d52ca8ee WatchSource:0}: Error finding container 5f86264d6f27b3151ff6e5548f969d24549a504af1d076ff7f732106d52ca8ee: Status 404 returned error can't find the container with id 5f86264d6f27b3151ff6e5548f969d24549a504af1d076ff7f732106d52ca8ee
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.726695 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerStarted","Data":"5202bfd8c71f25ab3e324c5bf46de55a52107ec003a4071238cedd2b62c7ae07"}
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.738506 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"31086a87-8ffa-4122-9924-f46df3be87fd","Type":"ContainerStarted","Data":"6fa421e7e8feffdcbdfb969f64030fac021f459a3f0a8cf0163be80105a598d9"}
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.743589 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"0c7c2afb-f325-4137-96a0-e217c2240fb1","Type":"ContainerStarted","Data":"da8add087a89f5b0155dac48ec0ecd2ac9c1f99d0af4986c63a376d58603b2cb"}
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.749120 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerStarted","Data":"53fb7553baf11d006086fd0f5b43897ca71b661c9e9fbdf3ae5c49f31559475e"}
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.753651 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerStarted","Data":"5f86264d6f27b3151ff6e5548f969d24549a504af1d076ff7f732106d52ca8ee"}
Nov 28 00:38:48 crc kubenswrapper[3556]: I1128 00:38:48.781052 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.30304793 podStartE2EDuration="45.780789817s" podCreationTimestamp="2025-11-28 00:38:03 +0000 UTC" firstStartedPulling="2025-11-28 00:38:06.302398881 +0000 UTC m=+1547.894630871" lastFinishedPulling="2025-11-28 00:38:47.780140768 +0000 UTC m=+1589.372372758" observedRunningTime="2025-11-28 00:38:48.765413321 +0000 UTC m=+1590.357645331" watchObservedRunningTime="2025-11-28 00:38:48.780789817 +0000 UTC m=+1590.373021807"
Nov 28 00:38:49 crc kubenswrapper[3556]: I1128 00:38:49.763691 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerStarted","Data":"e670e8c22114fb853f985f1b778aa6f638d7c816f39242306bc96c357a057877"}
Nov 28 00:38:49 crc kubenswrapper[3556]: I1128 00:38:49.766439 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerStarted","Data":"d9bae37050c88937eb7f1042f952f002b8a48f3b339d3e8591f36a129cea5e2d"}
Nov 28 00:38:51 crc kubenswrapper[3556]: I1128 00:38:51.080977 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Nov 28 00:38:51 crc kubenswrapper[3556]: I1128 00:38:51.081373 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0"
Nov 28 00:38:51 crc kubenswrapper[3556]: I1128 00:38:51.180209 3556 kubelet.go:2533] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Nov 28 00:38:51 crc kubenswrapper[3556]: I1128 00:38:51.878367 3556 kubelet.go:2533] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Nov 28 00:38:52 crc kubenswrapper[3556]: I1128 00:38:52.670029 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 00:38:52 crc kubenswrapper[3556]: I1128 00:38:52.670103 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 00:38:52 crc kubenswrapper[3556]: I1128 00:38:52.790797 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"31086a87-8ffa-4122-9924-f46df3be87fd","Type":"ContainerStarted","Data":"7e60c684724e786f3371e5e170514212d6a310e0bebde1459e097554e4ee6f90"}
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.765986 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"]
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.766137 3556 topology_manager.go:215] "Topology Admit Handler" podUID="e14c79b6-4eba-42d3-89a1-a72e8f3e58b9" podNamespace="service-telemetry" podName="default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.766913 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.771132 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.771965 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.775603 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"]
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.834937 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.835000 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2dnm\" (UniqueName: \"kubernetes.io/projected/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-kube-api-access-n2dnm\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.835115 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.835350 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.936575 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.936643 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.936678 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-n2dnm\" (UniqueName: \"kubernetes.io/projected/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-kube-api-access-n2dnm\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.936707 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.937349 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.937778 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.943040 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:53 crc kubenswrapper[3556]: I1128 00:38:53.954609 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2dnm\" (UniqueName: \"kubernetes.io/projected/e14c79b6-4eba-42d3-89a1-a72e8f3e58b9-kube-api-access-n2dnm\") pod \"default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w\" (UID: \"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.165963 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.747413 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w"]
Nov 28 00:38:54 crc kubenswrapper[3556]: W1128 00:38:54.756602 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode14c79b6_4eba_42d3_89a1_a72e8f3e58b9.slice/crio-fc3633d9a3d2eb6652ffa2e97d4f457af6f304418ae88f0e2ba6f9cbeb1fec19 WatchSource:0}: Error finding container fc3633d9a3d2eb6652ffa2e97d4f457af6f304418ae88f0e2ba6f9cbeb1fec19: Status 404 returned error can't find the container with id fc3633d9a3d2eb6652ffa2e97d4f457af6f304418ae88f0e2ba6f9cbeb1fec19
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.786876 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"]
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.786999 3556 topology_manager.go:215] "Topology Admit Handler" podUID="0598e713-9aa3-4365-9018-380ac3b9976d" podNamespace="service-telemetry" podName="default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.787833 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.791479 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.837710 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"]
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.877626 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0598e713-9aa3-4365-9018-380ac3b9976d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.877714 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffqp7\" (UniqueName: \"kubernetes.io/projected/0598e713-9aa3-4365-9018-380ac3b9976d-kube-api-access-ffqp7\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.877743 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0598e713-9aa3-4365-9018-380ac3b9976d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.877773 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0598e713-9aa3-4365-9018-380ac3b9976d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.937743 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" event={"ID":"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9","Type":"ContainerStarted","Data":"fc3633d9a3d2eb6652ffa2e97d4f457af6f304418ae88f0e2ba6f9cbeb1fec19"}
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.947774 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"31086a87-8ffa-4122-9924-f46df3be87fd","Type":"ContainerStarted","Data":"75f104d2c5ed2b59b5448cead68d9a8ed5ab18a3939857e04cb6624d0bcdcf13"}
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.983985 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0598e713-9aa3-4365-9018-380ac3b9976d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.984093 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-ffqp7\" (UniqueName: \"kubernetes.io/projected/0598e713-9aa3-4365-9018-380ac3b9976d-kube-api-access-ffqp7\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.984129 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0598e713-9aa3-4365-9018-380ac3b9976d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.984164 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0598e713-9aa3-4365-9018-380ac3b9976d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.986342 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0598e713-9aa3-4365-9018-380ac3b9976d-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.986833 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=16.445994998 podStartE2EDuration="28.986790448s" podCreationTimestamp="2025-11-28 00:38:26 +0000 UTC" firstStartedPulling="2025-11-28 00:38:40.664842777 +0000 UTC m=+1582.257074767" lastFinishedPulling="2025-11-28 00:38:53.205638227 +0000 UTC m=+1594.797870217" observedRunningTime="2025-11-28 00:38:54.985148238 +0000 UTC m=+1596.577380228" watchObservedRunningTime="2025-11-28 00:38:54.986790448 +0000 UTC m=+1596.579022438"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.988913 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0598e713-9aa3-4365-9018-380ac3b9976d-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:54 crc kubenswrapper[3556]: I1128 00:38:54.996350 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/0598e713-9aa3-4365-9018-380ac3b9976d-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:55 crc kubenswrapper[3556]: I1128 00:38:55.027580 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffqp7\" (UniqueName: \"kubernetes.io/projected/0598e713-9aa3-4365-9018-380ac3b9976d-kube-api-access-ffqp7\") pod \"default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk\" (UID: \"0598e713-9aa3-4365-9018-380ac3b9976d\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:55 crc kubenswrapper[3556]: I1128 00:38:55.180575 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"
Nov 28 00:38:55 crc kubenswrapper[3556]: I1128 00:38:55.702677 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk"]
Nov 28 00:38:55 crc kubenswrapper[3556]: I1128 00:38:55.954092 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" event={"ID":"0598e713-9aa3-4365-9018-380ac3b9976d","Type":"ContainerStarted","Data":"261dff51190c7560a90e17551a52acd8ed0f7e23c546e76e33886cd92035b426"}
Nov 28 00:38:56 crc kubenswrapper[3556]: I1128 00:38:55.955185 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerStarted","Data":"f58d08f6e1a537e8b6cb4f9d18ebd0008853f4129c3ac4189110a1b300fb5dd3"}
Nov 28 00:38:56 crc kubenswrapper[3556]: I1128 00:38:55.957552 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerStarted","Data":"04b196ac1914b9c611bfa58736e882da67a079697606f4de5d8981624c4896d5"}
Nov 28 00:38:56 crc kubenswrapper[3556]: I1128 00:38:55.960733 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" event={"ID":"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9","Type":"ContainerStarted","Data":"edac7b411355e8e8bb1f0c81e36e2097dbceb0fcd7a0da28b8912881ec1d7380"}
Nov 28 00:38:56 crc kubenswrapper[3556]: I1128 00:38:55.973212 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerStarted","Data":"914c5a5849cbdf99b23de32441a1977efd04005378f0349efa0e26b44c530c0a"}
Nov 28 00:38:56 crc kubenswrapper[3556]: I1128 00:38:56.984775 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" event={"ID":"0598e713-9aa3-4365-9018-380ac3b9976d","Type":"ContainerStarted","Data":"5c10db132188714ec842d73c97fe8e99b72681f65ccab1a2ad8009a67c3f0577"}
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.013171 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerStarted","Data":"dde32cc5d69d9b51496a4d2c663a23207effb040f5d19c760a2f8f6e9e6e8bb1"}
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.015412 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerStarted","Data":"2eb23ad8f13af62930f0b9c8e83ad24a123dbb552df96bce9504b189e59574b6"}
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.017185 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" event={"ID":"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9","Type":"ContainerStarted","Data":"802cc0e384f090f81d1febe60b7926e072c45eb2954762b1841806dca8a69064"}
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.019337 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerStarted","Data":"b0eae75195314b29b79865f33dfa37dcfc6dcce4b51f10edd5dcbee7c940ccfc"}
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.021703 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" event={"ID":"0598e713-9aa3-4365-9018-380ac3b9976d","Type":"ContainerStarted","Data":"dfdaf318db43bec392644f47044a112bf6ffa078a0546ae733f0e5216a0cba54"}
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.034291 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" podStartSLOduration=7.838951347 podStartE2EDuration="20.034226717s" podCreationTimestamp="2025-11-28 00:38:41 +0000 UTC" firstStartedPulling="2025-11-28 00:38:48.135853145 +0000 UTC m=+1589.728085135" lastFinishedPulling="2025-11-28 00:39:00.331128515 +0000 UTC m=+1601.923360505" observedRunningTime="2025-11-28 00:39:01.029097143 +0000 UTC m=+1602.621329133" watchObservedRunningTime="2025-11-28 00:39:01.034226717 +0000 UTC m=+1602.626458717"
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.084159 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" podStartSLOduration=3.712490509 podStartE2EDuration="16.084005534s" podCreationTimestamp="2025-11-28 00:38:45 +0000 UTC" firstStartedPulling="2025-11-28 00:38:48.056934318 +0000 UTC m=+1589.649166308" lastFinishedPulling="2025-11-28 00:39:00.428449343 +0000 UTC m=+1602.020681333" observedRunningTime="2025-11-28 00:39:01.058744497 +0000 UTC m=+1602.650976487" watchObservedRunningTime="2025-11-28 00:39:01.084005534 +0000 UTC m=+1602.676237534"
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.086948 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" podStartSLOduration=3.010052032 podStartE2EDuration="22.086900324s" podCreationTimestamp="2025-11-28 00:38:39 +0000 UTC" firstStartedPulling="2025-11-28 00:38:41.3769695 +0000 UTC m=+1582.969201490" lastFinishedPulling="2025-11-28 00:39:00.453817792 +0000 UTC m=+1602.046049782" observedRunningTime="2025-11-28 00:39:01.080236221 +0000 UTC m=+1602.672468211" watchObservedRunningTime="2025-11-28 00:39:01.086900324 +0000 UTC m=+1602.679132324"
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.133309 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" podStartSLOduration=2.568065036 podStartE2EDuration="8.133248426s" podCreationTimestamp="2025-11-28 00:38:53 +0000 UTC" firstStartedPulling="2025-11-28 00:38:54.760662176 +0000 UTC m=+1596.352894166" lastFinishedPulling="2025-11-28 00:39:00.325845576 +0000 UTC m=+1601.918077556" observedRunningTime="2025-11-28 00:39:01.102597267 +0000 UTC m=+1602.694829257" watchObservedRunningTime="2025-11-28 00:39:01.133248426 +0000 UTC m=+1602.725480436"
Nov 28 00:39:01 crc kubenswrapper[3556]: I1128 00:39:01.134746 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" podStartSLOduration=2.465716439 podStartE2EDuration="7.134720322s" podCreationTimestamp="2025-11-28 00:38:54 +0000 UTC" firstStartedPulling="2025-11-28 00:38:55.712619406 +0000 UTC m=+1597.304851396" lastFinishedPulling="2025-11-28 00:39:00.381623289 +0000 UTC m=+1601.973855279" observedRunningTime="2025-11-28 00:39:01.128236663 +0000 UTC m=+1602.720468663" watchObservedRunningTime="2025-11-28 00:39:01.134720322 +0000 UTC m=+1602.726952332"
Nov 28 00:39:06 crc kubenswrapper[3556]: I1128 00:39:06.208797 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-zx6wj"]
Nov 28 00:39:06 crc kubenswrapper[3556]: I1128 00:39:06.209219 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj" podUID="38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" containerName="default-interconnect" containerID="cri-o://475dcb8abb6d7d641d65f2078458a3e7bb4658b68151ec51de543baf01e84c88" gracePeriod=30
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.074999 3556 generic.go:334] "Generic (PLEG): container finished" podID="50a32d2a-8a43-446a-841a-2f4c8dc0932a" containerID="914c5a5849cbdf99b23de32441a1977efd04005378f0349efa0e26b44c530c0a" exitCode=0
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.075173 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerDied","Data":"914c5a5849cbdf99b23de32441a1977efd04005378f0349efa0e26b44c530c0a"}
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.076300 3556 scope.go:117] "RemoveContainer" containerID="914c5a5849cbdf99b23de32441a1977efd04005378f0349efa0e26b44c530c0a"
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.079263 3556 generic.go:334] "Generic (PLEG): container finished" podID="38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" containerID="475dcb8abb6d7d641d65f2078458a3e7bb4658b68151ec51de543baf01e84c88" exitCode=0
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.079355 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj" event={"ID":"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8","Type":"ContainerDied","Data":"475dcb8abb6d7d641d65f2078458a3e7bb4658b68151ec51de543baf01e84c88"}
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.081690 3556 generic.go:334] "Generic (PLEG): container finished" podID="0598e713-9aa3-4365-9018-380ac3b9976d" containerID="5c10db132188714ec842d73c97fe8e99b72681f65ccab1a2ad8009a67c3f0577" exitCode=0
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.081756 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" event={"ID":"0598e713-9aa3-4365-9018-380ac3b9976d","Type":"ContainerDied","Data":"5c10db132188714ec842d73c97fe8e99b72681f65ccab1a2ad8009a67c3f0577"}
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.082208 3556 scope.go:117] "RemoveContainer" containerID="5c10db132188714ec842d73c97fe8e99b72681f65ccab1a2ad8009a67c3f0577"
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.085038 3556 generic.go:334] "Generic (PLEG): container finished" podID="fc2ffb90-6204-4abc-90b8-1f67c8086a99" containerID="f58d08f6e1a537e8b6cb4f9d18ebd0008853f4129c3ac4189110a1b300fb5dd3" exitCode=0
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.085103 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerDied","Data":"f58d08f6e1a537e8b6cb4f9d18ebd0008853f4129c3ac4189110a1b300fb5dd3"}
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.085542 3556 scope.go:117] "RemoveContainer" containerID="f58d08f6e1a537e8b6cb4f9d18ebd0008853f4129c3ac4189110a1b300fb5dd3"
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.090543 3556 generic.go:334] "Generic (PLEG): container finished" podID="e14c79b6-4eba-42d3-89a1-a72e8f3e58b9" containerID="edac7b411355e8e8bb1f0c81e36e2097dbceb0fcd7a0da28b8912881ec1d7380" exitCode=0
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.090586 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" event={"ID":"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9","Type":"ContainerDied","Data":"edac7b411355e8e8bb1f0c81e36e2097dbceb0fcd7a0da28b8912881ec1d7380"}
Nov 28 00:39:08 crc kubenswrapper[3556]: I1128 00:39:08.090921 3556 scope.go:117] "RemoveContainer" containerID="edac7b411355e8e8bb1f0c81e36e2097dbceb0fcd7a0da28b8912881ec1d7380"
Nov 28 00:39:09 crc kubenswrapper[3556]: I1128 00:39:09.098437 3556 generic.go:334] "Generic (PLEG): container finished" podID="fdf35e1b-f7c9-41e2-8ae8-308f80623968" containerID="04b196ac1914b9c611bfa58736e882da67a079697606f4de5d8981624c4896d5" exitCode=0
Nov 28 00:39:09 crc kubenswrapper[3556]: I1128 00:39:09.098477 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerDied","Data":"04b196ac1914b9c611bfa58736e882da67a079697606f4de5d8981624c4896d5"}
Nov 28 00:39:09 crc kubenswrapper[3556]: I1128 00:39:09.099130 3556 scope.go:117] "RemoveContainer" containerID="04b196ac1914b9c611bfa58736e882da67a079697606f4de5d8981624c4896d5"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.110964 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerStarted","Data":"0328ccf4ba09768cd341c0230e69d2e1c1fddc0d83055378ed675a3f0cf03299"}
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.319059 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"]
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.319190 3556 topology_manager.go:215] "Topology Admit Handler" podUID="a0504edd-5743-4726-bf65-8b0dfb5c29da" podNamespace="service-telemetry" podName="qdr-test"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.319897 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.323461 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.323744 3556 reflector.go:351] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.332759 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.349654 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.412815 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-5spw5"]
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.413233 3556 topology_manager.go:215] "Topology Admit Handler" podUID="9677b702-b043-4d23-bb90-11259df9be04" podNamespace="service-telemetry" podName="default-interconnect-84dbc59cb8-5spw5"
Nov 28 00:39:11 crc kubenswrapper[3556]: E1128 00:39:11.413545 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" containerName="default-interconnect"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.417172 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" containerName="default-interconnect"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.417479 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" containerName="default-interconnect"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.418169 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.419274 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/a0504edd-5743-4726-bf65-8b0dfb5c29da-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.419352 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/a0504edd-5743-4726-bf65-8b0dfb5c29da-qdr-test-config\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.419396 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv5lg\" (UniqueName: \"kubernetes.io/projected/a0504edd-5743-4726-bf65-8b0dfb5c29da-kube-api-access-wv5lg\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test"
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.431339 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-5spw5"]
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.520583 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-ca\") pod \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") "
Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.520889 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume
\"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-ca\") pod \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.521514 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-config\") pod \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.522233 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" (UID: "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.522534 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-credentials\") pod \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.522915 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2tz6\" (UniqueName: \"kubernetes.io/projected/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-kube-api-access-t2tz6\") pod \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.523120 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" 
(UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-credentials\") pod \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.523214 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-users\") pod \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\" (UID: \"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8\") " Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.523472 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/a0504edd-5743-4726-bf65-8b0dfb5c29da-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.523650 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvp7\" (UniqueName: \"kubernetes.io/projected/9677b702-b043-4d23-bb90-11259df9be04-kube-api-access-dqvp7\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.523771 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/9677b702-b043-4d23-bb90-11259df9be04-sasl-config\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.523873 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"qdr-test-config\" 
(UniqueName: \"kubernetes.io/configmap/a0504edd-5743-4726-bf65-8b0dfb5c29da-qdr-test-config\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.523960 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-sasl-users\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.524299 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.524396 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.524493 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " 
pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.524590 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-wv5lg\" (UniqueName: \"kubernetes.io/projected/a0504edd-5743-4726-bf65-8b0dfb5c29da-kube-api-access-wv5lg\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.524689 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.524782 3556 reconciler_common.go:300] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-config\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.526038 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/a0504edd-5743-4726-bf65-8b0dfb5c29da-qdr-test-config\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.526092 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" (UID: "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.527299 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" (UID: "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.529240 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" (UID: "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.529923 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-kube-api-access-t2tz6" (OuterVolumeSpecName: "kube-api-access-t2tz6") pod "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" (UID: "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8"). InnerVolumeSpecName "kube-api-access-t2tz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.530145 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" (UID: "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8"). 
InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.531639 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/a0504edd-5743-4726-bf65-8b0dfb5c29da-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.540343 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" (UID: "38a1a8aa-ff87-4138-bee6-376ab9e7c2d8"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.545936 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv5lg\" (UniqueName: \"kubernetes.io/projected/a0504edd-5743-4726-bf65-8b0dfb5c29da-kube-api-access-wv5lg\") pod \"qdr-test\" (UID: \"a0504edd-5743-4726-bf65-8b0dfb5c29da\") " pod="service-telemetry/qdr-test" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.625934 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/9677b702-b043-4d23-bb90-11259df9be04-sasl-config\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.625977 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-sasl-users\") pod \"default-interconnect-84dbc59cb8-5spw5\" 
(UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626027 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626053 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626078 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626119 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc 
kubenswrapper[3556]: I1128 00:39:11.626149 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-dqvp7\" (UniqueName: \"kubernetes.io/projected/9677b702-b043-4d23-bb90-11259df9be04-kube-api-access-dqvp7\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626194 3556 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626210 3556 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626225 3556 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626238 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t2tz6\" (UniqueName: \"kubernetes.io/projected/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-kube-api-access-t2tz6\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.626251 3556 reconciler_common.go:300] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 
00:39:11.626262 3556 reconciler_common.go:300] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8-sasl-users\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.627206 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/9677b702-b043-4d23-bb90-11259df9be04-sasl-config\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.632484 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-sasl-users\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.632638 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-inter-router-credentials\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.633127 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-openstack-credentials\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.634724 
3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-inter-router-ca\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.642153 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/9677b702-b043-4d23-bb90-11259df9be04-default-interconnect-openstack-ca\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.646651 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvp7\" (UniqueName: \"kubernetes.io/projected/9677b702-b043-4d23-bb90-11259df9be04-kube-api-access-dqvp7\") pod \"default-interconnect-84dbc59cb8-5spw5\" (UID: \"9677b702-b043-4d23-bb90-11259df9be04\") " pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.658701 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.732811 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" Nov 28 00:39:11 crc kubenswrapper[3556]: I1128 00:39:11.946300 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-5spw5"] Nov 28 00:39:11 crc kubenswrapper[3556]: W1128 00:39:11.953260 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9677b702_b043_4d23_bb90_11259df9be04.slice/crio-814c73bf8f86bff45503b623eb7f3a7c5716df8e0c41768ac7e5996041c12160 WatchSource:0}: Error finding container 814c73bf8f86bff45503b623eb7f3a7c5716df8e0c41768ac7e5996041c12160: Status 404 returned error can't find the container with id 814c73bf8f86bff45503b623eb7f3a7c5716df8e0c41768ac7e5996041c12160 Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.079343 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Nov 28 00:39:12 crc kubenswrapper[3556]: W1128 00:39:12.081382 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0504edd_5743_4726_bf65_8b0dfb5c29da.slice/crio-cd203626b0f101523c44727946ea74315ced28d148be996b5866853424da35ea WatchSource:0}: Error finding container cd203626b0f101523c44727946ea74315ced28d148be996b5866853424da35ea: Status 404 returned error can't find the container with id cd203626b0f101523c44727946ea74315ced28d148be996b5866853424da35ea Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.125419 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerStarted","Data":"825ec95d8174289c8a9394817d509aedd38a678ea50e4a3b019ab46855309717"} Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.129404 3556 generic.go:334] "Generic (PLEG): container finished" 
podID="fdf35e1b-f7c9-41e2-8ae8-308f80623968" containerID="0328ccf4ba09768cd341c0230e69d2e1c1fddc0d83055378ed675a3f0cf03299" exitCode=0 Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.129448 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerDied","Data":"0328ccf4ba09768cd341c0230e69d2e1c1fddc0d83055378ed675a3f0cf03299"} Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.129487 3556 scope.go:117] "RemoveContainer" containerID="04b196ac1914b9c611bfa58736e882da67a079697606f4de5d8981624c4896d5" Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.129939 3556 scope.go:117] "RemoveContainer" containerID="0328ccf4ba09768cd341c0230e69d2e1c1fddc0d83055378ed675a3f0cf03299" Nov 28 00:39:12 crc kubenswrapper[3556]: E1128 00:39:12.130285 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv_service-telemetry(fdf35e1b-f7c9-41e2-8ae8-308f80623968)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" podUID="fdf35e1b-f7c9-41e2-8ae8-308f80623968" Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.134223 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" event={"ID":"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9","Type":"ContainerStarted","Data":"b2ade1431266f6a3b0f72024472a5925d4d55eee3485838b442944d4104132f9"} Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.136373 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"a0504edd-5743-4726-bf65-8b0dfb5c29da","Type":"ContainerStarted","Data":"cd203626b0f101523c44727946ea74315ced28d148be996b5866853424da35ea"} Nov 28 00:39:12 crc 
kubenswrapper[3556]: I1128 00:39:12.140441 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerStarted","Data":"bf8f00625006236d470624bd78dfeafad771b89a5857a0c3df29228822ea4f1e"} Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.156577 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj" event={"ID":"38a1a8aa-ff87-4138-bee6-376ab9e7c2d8","Type":"ContainerDied","Data":"320cad16e35e0e4bb064424a5eed46e192d396d253fcc65d8a3fa96c259a2a44"} Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.156587 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-84dbc59cb8-zx6wj" Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.158899 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" event={"ID":"9677b702-b043-4d23-bb90-11259df9be04","Type":"ContainerStarted","Data":"814c73bf8f86bff45503b623eb7f3a7c5716df8e0c41768ac7e5996041c12160"} Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.163356 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" event={"ID":"0598e713-9aa3-4365-9018-380ac3b9976d","Type":"ContainerStarted","Data":"05a6c3495aa5a4d421a844cad0a56825de381b3a0912435c45e8d06fd3475e2b"} Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.177062 3556 scope.go:117] "RemoveContainer" containerID="475dcb8abb6d7d641d65f2078458a3e7bb4658b68151ec51de543baf01e84c88" Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.290040 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-zx6wj"] Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.300843 3556 kubelet.go:2439] 
"SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-84dbc59cb8-zx6wj"] Nov 28 00:39:12 crc kubenswrapper[3556]: I1128 00:39:12.920796 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a1a8aa-ff87-4138-bee6-376ab9e7c2d8" path="/var/lib/kubelet/pods/38a1a8aa-ff87-4138-bee6-376ab9e7c2d8/volumes" Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.170461 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" event={"ID":"9677b702-b043-4d23-bb90-11259df9be04","Type":"ContainerStarted","Data":"96afb1422e69967bf6f5a1597354acca7da3f9a0153969891b3b5c2148e210d3"} Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.174132 3556 scope.go:117] "RemoveContainer" containerID="0328ccf4ba09768cd341c0230e69d2e1c1fddc0d83055378ed675a3f0cf03299" Nov 28 00:39:13 crc kubenswrapper[3556]: E1128 00:39:13.174446 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv_service-telemetry(fdf35e1b-f7c9-41e2-8ae8-308f80623968)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" podUID="fdf35e1b-f7c9-41e2-8ae8-308f80623968" Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.178527 3556 generic.go:334] "Generic (PLEG): container finished" podID="e14c79b6-4eba-42d3-89a1-a72e8f3e58b9" containerID="b2ade1431266f6a3b0f72024472a5925d4d55eee3485838b442944d4104132f9" exitCode=0 Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.178573 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" event={"ID":"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9","Type":"ContainerDied","Data":"b2ade1431266f6a3b0f72024472a5925d4d55eee3485838b442944d4104132f9"} Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 
00:39:13.178599 3556 scope.go:117] "RemoveContainer" containerID="edac7b411355e8e8bb1f0c81e36e2097dbceb0fcd7a0da28b8912881ec1d7380" Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.178980 3556 scope.go:117] "RemoveContainer" containerID="b2ade1431266f6a3b0f72024472a5925d4d55eee3485838b442944d4104132f9" Nov 28 00:39:13 crc kubenswrapper[3556]: E1128 00:39:13.179303 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w_service-telemetry(e14c79b6-4eba-42d3-89a1-a72e8f3e58b9)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" podUID="e14c79b6-4eba-42d3-89a1-a72e8f3e58b9" Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.183086 3556 generic.go:334] "Generic (PLEG): container finished" podID="50a32d2a-8a43-446a-841a-2f4c8dc0932a" containerID="bf8f00625006236d470624bd78dfeafad771b89a5857a0c3df29228822ea4f1e" exitCode=0 Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.183112 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerDied","Data":"bf8f00625006236d470624bd78dfeafad771b89a5857a0c3df29228822ea4f1e"} Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.183518 3556 scope.go:117] "RemoveContainer" containerID="bf8f00625006236d470624bd78dfeafad771b89a5857a0c3df29228822ea4f1e" Nov 28 00:39:13 crc kubenswrapper[3556]: E1128 00:39:13.183886 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68_service-telemetry(50a32d2a-8a43-446a-841a-2f4c8dc0932a)\"" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" podUID="50a32d2a-8a43-446a-841a-2f4c8dc0932a" Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.199284 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/default-interconnect-84dbc59cb8-5spw5" podStartSLOduration=7.199241479 podStartE2EDuration="7.199241479s" podCreationTimestamp="2025-11-28 00:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 00:39:13.190834043 +0000 UTC m=+1614.783066043" watchObservedRunningTime="2025-11-28 00:39:13.199241479 +0000 UTC m=+1614.791473459" Nov 28 00:39:13 crc kubenswrapper[3556]: I1128 00:39:13.275219 3556 scope.go:117] "RemoveContainer" containerID="914c5a5849cbdf99b23de32441a1977efd04005378f0349efa0e26b44c530c0a" Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.191452 3556 generic.go:334] "Generic (PLEG): container finished" podID="0598e713-9aa3-4365-9018-380ac3b9976d" containerID="05a6c3495aa5a4d421a844cad0a56825de381b3a0912435c45e8d06fd3475e2b" exitCode=0 Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.191507 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" event={"ID":"0598e713-9aa3-4365-9018-380ac3b9976d","Type":"ContainerDied","Data":"05a6c3495aa5a4d421a844cad0a56825de381b3a0912435c45e8d06fd3475e2b"} Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.191560 3556 scope.go:117] "RemoveContainer" containerID="5c10db132188714ec842d73c97fe8e99b72681f65ccab1a2ad8009a67c3f0577" Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.193441 3556 scope.go:117] "RemoveContainer" containerID="05a6c3495aa5a4d421a844cad0a56825de381b3a0912435c45e8d06fd3475e2b" Nov 28 00:39:14 crc kubenswrapper[3556]: E1128 00:39:14.194291 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk_service-telemetry(0598e713-9aa3-4365-9018-380ac3b9976d)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" podUID="0598e713-9aa3-4365-9018-380ac3b9976d" Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.218275 3556 generic.go:334] "Generic (PLEG): container finished" podID="fc2ffb90-6204-4abc-90b8-1f67c8086a99" containerID="825ec95d8174289c8a9394817d509aedd38a678ea50e4a3b019ab46855309717" exitCode=0 Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.218345 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerDied","Data":"825ec95d8174289c8a9394817d509aedd38a678ea50e4a3b019ab46855309717"} Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.219260 3556 scope.go:117] "RemoveContainer" containerID="825ec95d8174289c8a9394817d509aedd38a678ea50e4a3b019ab46855309717" Nov 28 00:39:14 crc kubenswrapper[3556]: E1128 00:39:14.219773 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-7866965967-qtslj_service-telemetry(fc2ffb90-6204-4abc-90b8-1f67c8086a99)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" podUID="fc2ffb90-6204-4abc-90b8-1f67c8086a99" Nov 28 00:39:14 crc kubenswrapper[3556]: I1128 00:39:14.260966 3556 scope.go:117] "RemoveContainer" containerID="f58d08f6e1a537e8b6cb4f9d18ebd0008853f4129c3ac4189110a1b300fb5dd3" Nov 28 00:39:18 crc kubenswrapper[3556]: I1128 00:39:18.722267 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:39:18 crc kubenswrapper[3556]: 
I1128 00:39:18.722892 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:39:18 crc kubenswrapper[3556]: I1128 00:39:18.722943 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:39:18 crc kubenswrapper[3556]: I1128 00:39:18.722986 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:39:18 crc kubenswrapper[3556]: I1128 00:39:18.723037 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.269431 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"a0504edd-5743-4726-bf65-8b0dfb5c29da","Type":"ContainerStarted","Data":"809a74478c32c3bb426645e2e20872cc27164718be837a827194f4e35a83066c"} Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.288247 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.142467902 podStartE2EDuration="10.288198908s" podCreationTimestamp="2025-11-28 00:39:11 +0000 UTC" firstStartedPulling="2025-11-28 00:39:12.083242392 +0000 UTC m=+1613.675474372" lastFinishedPulling="2025-11-28 00:39:20.228973378 +0000 UTC m=+1621.821205378" observedRunningTime="2025-11-28 00:39:21.283088483 +0000 UTC m=+1622.875320503" watchObservedRunningTime="2025-11-28 00:39:21.288198908 +0000 UTC m=+1622.880430898" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.634250 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-pkxn5"] Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.634396 3556 topology_manager.go:215] "Topology Admit Handler" podUID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" 
podNamespace="service-telemetry" podName="stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.635507 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.637995 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.638194 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.638554 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.638612 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.644446 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-pkxn5"] Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.644786 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.648875 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.781448 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-healthcheck-log\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 
00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.781517 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.782196 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfbv\" (UniqueName: \"kubernetes.io/projected/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-kube-api-access-jzfbv\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.782389 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-publisher\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.782436 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-sensubility-config\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.782480 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.782544 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-config\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.884236 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-publisher\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.884297 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-sensubility-config\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.884342 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.884769 3556 reconciler_common.go:231] "operationExecutor.MountVolume started 
for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-config\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.884861 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-healthcheck-log\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.884922 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.884965 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-jzfbv\" (UniqueName: \"kubernetes.io/projected/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-kube-api-access-jzfbv\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.885498 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-sensubility-config\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.885560 3556 operation_generator.go:721] "MountVolume.SetUp 
succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-publisher\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.885770 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.885836 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-config\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.885923 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-healthcheck-log\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.886648 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.904897 3556 operation_generator.go:721] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jzfbv\" (UniqueName: \"kubernetes.io/projected/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-kube-api-access-jzfbv\") pod \"stf-smoketest-smoke1-pkxn5\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:21 crc kubenswrapper[3556]: I1128 00:39:21.952356 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.034334 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.034487 3556 topology_manager.go:215] "Topology Admit Handler" podUID="d75caad6-f048-446d-bb68-fd2a8d5f98b7" podNamespace="service-telemetry" podName="curl" Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.035298 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.041505 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.086545 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkzn\" (UniqueName: \"kubernetes.io/projected/d75caad6-f048-446d-bb68-fd2a8d5f98b7-kube-api-access-clkzn\") pod \"curl\" (UID: \"d75caad6-f048-446d-bb68-fd2a8d5f98b7\") " pod="service-telemetry/curl" Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.188167 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-clkzn\" (UniqueName: \"kubernetes.io/projected/d75caad6-f048-446d-bb68-fd2a8d5f98b7-kube-api-access-clkzn\") pod \"curl\" (UID: \"d75caad6-f048-446d-bb68-fd2a8d5f98b7\") " pod="service-telemetry/curl" Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.206057 3556 kubelet.go:2436] "SyncLoop UPDATE" 
source="api" pods=["service-telemetry/stf-smoketest-smoke1-pkxn5"] Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.210856 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-clkzn\" (UniqueName: \"kubernetes.io/projected/d75caad6-f048-446d-bb68-fd2a8d5f98b7-kube-api-access-clkzn\") pod \"curl\" (UID: \"d75caad6-f048-446d-bb68-fd2a8d5f98b7\") " pod="service-telemetry/curl" Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.276388 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" event={"ID":"98cbf3b2-6aa1-46ce-a73f-a8e09baed134","Type":"ContainerStarted","Data":"43b5c975f6f01c1143820f1497e4fc66abd0c00e0886696a96fd654136450b13"} Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.352724 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.664474 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.664771 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 00:39:22 crc kubenswrapper[3556]: W1128 00:39:22.784005 3556 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd75caad6_f048_446d_bb68_fd2a8d5f98b7.slice/crio-4494f1d58a6771697383b7385a53caf432847f6d7c877cee9143033de4875a42 WatchSource:0}: Error finding 
container 4494f1d58a6771697383b7385a53caf432847f6d7c877cee9143033de4875a42: Status 404 returned error can't find the container with id 4494f1d58a6771697383b7385a53caf432847f6d7c877cee9143033de4875a42 Nov 28 00:39:22 crc kubenswrapper[3556]: I1128 00:39:22.793179 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Nov 28 00:39:23 crc kubenswrapper[3556]: I1128 00:39:23.284004 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"d75caad6-f048-446d-bb68-fd2a8d5f98b7","Type":"ContainerStarted","Data":"4494f1d58a6771697383b7385a53caf432847f6d7c877cee9143033de4875a42"} Nov 28 00:39:24 crc kubenswrapper[3556]: I1128 00:39:24.912928 3556 scope.go:117] "RemoveContainer" containerID="b2ade1431266f6a3b0f72024472a5925d4d55eee3485838b442944d4104132f9" Nov 28 00:39:24 crc kubenswrapper[3556]: I1128 00:39:24.913569 3556 scope.go:117] "RemoveContainer" containerID="bf8f00625006236d470624bd78dfeafad771b89a5857a0c3df29228822ea4f1e" Nov 28 00:39:25 crc kubenswrapper[3556]: I1128 00:39:25.298364 3556 generic.go:334] "Generic (PLEG): container finished" podID="d75caad6-f048-446d-bb68-fd2a8d5f98b7" containerID="e6f1b94a65e20cd8c554cf265d7d5b022041a85e2480411af2fab1ce74960d4a" exitCode=0 Nov 28 00:39:25 crc kubenswrapper[3556]: I1128 00:39:25.298468 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"d75caad6-f048-446d-bb68-fd2a8d5f98b7","Type":"ContainerDied","Data":"e6f1b94a65e20cd8c554cf265d7d5b022041a85e2480411af2fab1ce74960d4a"} Nov 28 00:39:25 crc kubenswrapper[3556]: I1128 00:39:25.301130 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w" event={"ID":"e14c79b6-4eba-42d3-89a1-a72e8f3e58b9","Type":"ContainerStarted","Data":"b544139d503efe8fd92dd1dc0d3c6bc2a6bfdc584de0c2760ff172b5248f97b9"} Nov 28 00:39:25 crc kubenswrapper[3556]: I1128 00:39:25.914234 3556 scope.go:117] 
"RemoveContainer" containerID="825ec95d8174289c8a9394817d509aedd38a678ea50e4a3b019ab46855309717" Nov 28 00:39:26 crc kubenswrapper[3556]: I1128 00:39:26.314974 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68" event={"ID":"50a32d2a-8a43-446a-841a-2f4c8dc0932a","Type":"ContainerStarted","Data":"4adead211cd858931fba767db304c9d2db7254f708cb492c28239134094a2804"} Nov 28 00:39:27 crc kubenswrapper[3556]: I1128 00:39:27.913245 3556 scope.go:117] "RemoveContainer" containerID="0328ccf4ba09768cd341c0230e69d2e1c1fddc0d83055378ed675a3f0cf03299" Nov 28 00:39:28 crc kubenswrapper[3556]: I1128 00:39:28.923255 3556 scope.go:117] "RemoveContainer" containerID="05a6c3495aa5a4d421a844cad0a56825de381b3a0912435c45e8d06fd3475e2b" Nov 28 00:39:31 crc kubenswrapper[3556]: I1128 00:39:31.572648 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Nov 28 00:39:31 crc kubenswrapper[3556]: I1128 00:39:31.729456 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clkzn\" (UniqueName: \"kubernetes.io/projected/d75caad6-f048-446d-bb68-fd2a8d5f98b7-kube-api-access-clkzn\") pod \"d75caad6-f048-446d-bb68-fd2a8d5f98b7\" (UID: \"d75caad6-f048-446d-bb68-fd2a8d5f98b7\") " Nov 28 00:39:31 crc kubenswrapper[3556]: I1128 00:39:31.734259 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75caad6-f048-446d-bb68-fd2a8d5f98b7-kube-api-access-clkzn" (OuterVolumeSpecName: "kube-api-access-clkzn") pod "d75caad6-f048-446d-bb68-fd2a8d5f98b7" (UID: "d75caad6-f048-446d-bb68-fd2a8d5f98b7"). InnerVolumeSpecName "kube-api-access-clkzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:39:31 crc kubenswrapper[3556]: I1128 00:39:31.762599 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_d75caad6-f048-446d-bb68-fd2a8d5f98b7/curl/0.log" Nov 28 00:39:31 crc kubenswrapper[3556]: I1128 00:39:31.831294 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-clkzn\" (UniqueName: \"kubernetes.io/projected/d75caad6-f048-446d-bb68-fd2a8d5f98b7-kube-api-access-clkzn\") on node \"crc\" DevicePath \"\"" Nov 28 00:39:32 crc kubenswrapper[3556]: I1128 00:39:32.054975 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6755fc87b7-7vwps_a6b10489-636f-4218-9cac-8fc73e3d3e34/prometheus-webhook-snmp/0.log" Nov 28 00:39:32 crc kubenswrapper[3556]: I1128 00:39:32.360350 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"d75caad6-f048-446d-bb68-fd2a8d5f98b7","Type":"ContainerDied","Data":"4494f1d58a6771697383b7385a53caf432847f6d7c877cee9143033de4875a42"} Nov 28 00:39:32 crc kubenswrapper[3556]: I1128 00:39:32.360387 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4494f1d58a6771697383b7385a53caf432847f6d7c877cee9143033de4875a42" Nov 28 00:39:32 crc kubenswrapper[3556]: I1128 00:39:32.360457 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Nov 28 00:39:38 crc kubenswrapper[3556]: I1128 00:39:38.279594 3556 patch_prober.go:28] interesting pod/route-controller-manager-cc767df55-mvwlp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 00:39:38 crc kubenswrapper[3556]: I1128 00:39:38.280125 3556 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-cc767df55-mvwlp" podUID="efeaa66f-42fc-42bc-bbad-71a5047fb302" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 00:39:40 crc kubenswrapper[3556]: I1128 00:39:40.418242 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" event={"ID":"98cbf3b2-6aa1-46ce-a73f-a8e09baed134","Type":"ContainerStarted","Data":"975d8896b86f4ec83e361a6055e512c98a28256c2c86a02a74009e9c7ebfdfa8"} Nov 28 00:39:40 crc kubenswrapper[3556]: I1128 00:39:40.422764 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk" event={"ID":"0598e713-9aa3-4365-9018-380ac3b9976d","Type":"ContainerStarted","Data":"edd7de49c58a2d8e24eef2ffc5f7edd985029d01fa474adf1ecc41b8f36138cc"} Nov 28 00:39:40 crc kubenswrapper[3556]: I1128 00:39:40.428184 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-7866965967-qtslj" event={"ID":"fc2ffb90-6204-4abc-90b8-1f67c8086a99","Type":"ContainerStarted","Data":"25d025263fcfe04d3ce198c5b82ae00315090dff464eee48995e260c4f09ea58"} Nov 28 00:39:40 crc 
kubenswrapper[3556]: I1128 00:39:40.434283 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv" event={"ID":"fdf35e1b-f7c9-41e2-8ae8-308f80623968","Type":"ContainerStarted","Data":"3840402904c7f27e8937e578aa85a0aa3f62c3733f27b50cb38e7681b3785e4b"} Nov 28 00:39:50 crc kubenswrapper[3556]: I1128 00:39:50.519633 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" event={"ID":"98cbf3b2-6aa1-46ce-a73f-a8e09baed134","Type":"ContainerStarted","Data":"a7023d515a2e1c17cec54a86541d8410bfa15c530c6a2f9ae0743de9c43e79cb"} Nov 28 00:39:50 crc kubenswrapper[3556]: I1128 00:39:50.549204 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" podStartSLOduration=2.205027974 podStartE2EDuration="29.549149601s" podCreationTimestamp="2025-11-28 00:39:21 +0000 UTC" firstStartedPulling="2025-11-28 00:39:22.212734018 +0000 UTC m=+1623.804966008" lastFinishedPulling="2025-11-28 00:39:49.556855645 +0000 UTC m=+1651.149087635" observedRunningTime="2025-11-28 00:39:50.544539697 +0000 UTC m=+1652.136771747" watchObservedRunningTime="2025-11-28 00:39:50.549149601 +0000 UTC m=+1652.141381601" Nov 28 00:39:52 crc kubenswrapper[3556]: I1128 00:39:52.664108 3556 patch_prober.go:28] interesting pod/machine-config-daemon-zpnhg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 00:39:52 crc kubenswrapper[3556]: I1128 00:39:52.664187 3556 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 28 00:39:52 crc kubenswrapper[3556]: I1128 00:39:52.664230 3556 kubelet.go:2533] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" Nov 28 00:39:52 crc kubenswrapper[3556]: I1128 00:39:52.665139 3556 kuberuntime_manager.go:1029] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2"} pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 00:39:52 crc kubenswrapper[3556]: I1128 00:39:52.665341 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerName="machine-config-daemon" containerID="cri-o://d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" gracePeriod=600 Nov 28 00:39:52 crc kubenswrapper[3556]: E1128 00:39:52.760786 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:39:53 crc kubenswrapper[3556]: I1128 00:39:53.548098 3556 generic.go:334] "Generic (PLEG): container finished" podID="9d0dcce3-d96e-48cb-9b9f-362105911589" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" exitCode=0 Nov 28 00:39:53 crc kubenswrapper[3556]: I1128 00:39:53.548217 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerDied","Data":"d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2"} Nov 28 00:39:53 crc kubenswrapper[3556]: I1128 00:39:53.548348 3556 scope.go:117] "RemoveContainer" containerID="77ce0f6a03e4a0ff03abcc42291734e51c9965a62271e2d0ca1f6177a9180a17" Nov 28 00:39:53 crc kubenswrapper[3556]: I1128 00:39:53.549440 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:39:53 crc kubenswrapper[3556]: E1128 00:39:53.550772 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:40:02 crc kubenswrapper[3556]: I1128 00:40:02.235560 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6755fc87b7-7vwps_a6b10489-636f-4218-9cac-8fc73e3d3e34/prometheus-webhook-snmp/0.log" Nov 28 00:40:08 crc kubenswrapper[3556]: I1128 00:40:08.919244 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:40:08 crc kubenswrapper[3556]: E1128 00:40:08.919941 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 
00:40:13 crc kubenswrapper[3556]: I1128 00:40:13.704368 3556 generic.go:334] "Generic (PLEG): container finished" podID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerID="975d8896b86f4ec83e361a6055e512c98a28256c2c86a02a74009e9c7ebfdfa8" exitCode=0 Nov 28 00:40:13 crc kubenswrapper[3556]: I1128 00:40:13.704489 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" event={"ID":"98cbf3b2-6aa1-46ce-a73f-a8e09baed134","Type":"ContainerDied","Data":"975d8896b86f4ec83e361a6055e512c98a28256c2c86a02a74009e9c7ebfdfa8"} Nov 28 00:40:13 crc kubenswrapper[3556]: I1128 00:40:13.706670 3556 scope.go:117] "RemoveContainer" containerID="975d8896b86f4ec83e361a6055e512c98a28256c2c86a02a74009e9c7ebfdfa8" Nov 28 00:40:18 crc kubenswrapper[3556]: I1128 00:40:18.724389 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:40:18 crc kubenswrapper[3556]: I1128 00:40:18.725120 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:40:18 crc kubenswrapper[3556]: I1128 00:40:18.725160 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:40:18 crc kubenswrapper[3556]: I1128 00:40:18.725217 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:40:18 crc kubenswrapper[3556]: I1128 00:40:18.725287 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:40:21 crc kubenswrapper[3556]: I1128 00:40:21.762429 3556 generic.go:334] "Generic (PLEG): container finished" podID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerID="a7023d515a2e1c17cec54a86541d8410bfa15c530c6a2f9ae0743de9c43e79cb" exitCode=0 Nov 28 00:40:21 crc kubenswrapper[3556]: I1128 00:40:21.762544 3556 
kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" event={"ID":"98cbf3b2-6aa1-46ce-a73f-a8e09baed134","Type":"ContainerDied","Data":"a7023d515a2e1c17cec54a86541d8410bfa15c530c6a2f9ae0743de9c43e79cb"} Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.094604 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.259604 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzfbv\" (UniqueName: \"kubernetes.io/projected/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-kube-api-access-jzfbv\") pod \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.259686 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-sensubility-config\") pod \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.259743 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-entrypoint-script\") pod \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.259776 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-publisher\") pod \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.259853 3556 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-entrypoint-script\") pod \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.259900 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-healthcheck-log\") pod \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.259975 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-config\") pod \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\" (UID: \"98cbf3b2-6aa1-46ce-a73f-a8e09baed134\") " Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.272803 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-kube-api-access-jzfbv" (OuterVolumeSpecName: "kube-api-access-jzfbv") pod "98cbf3b2-6aa1-46ce-a73f-a8e09baed134" (UID: "98cbf3b2-6aa1-46ce-a73f-a8e09baed134"). InnerVolumeSpecName "kube-api-access-jzfbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.276061 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "98cbf3b2-6aa1-46ce-a73f-a8e09baed134" (UID: "98cbf3b2-6aa1-46ce-a73f-a8e09baed134"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.277507 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "98cbf3b2-6aa1-46ce-a73f-a8e09baed134" (UID: "98cbf3b2-6aa1-46ce-a73f-a8e09baed134"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.278303 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "98cbf3b2-6aa1-46ce-a73f-a8e09baed134" (UID: "98cbf3b2-6aa1-46ce-a73f-a8e09baed134"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.283898 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "98cbf3b2-6aa1-46ce-a73f-a8e09baed134" (UID: "98cbf3b2-6aa1-46ce-a73f-a8e09baed134"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.285910 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "98cbf3b2-6aa1-46ce-a73f-a8e09baed134" (UID: "98cbf3b2-6aa1-46ce-a73f-a8e09baed134"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.314218 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "98cbf3b2-6aa1-46ce-a73f-a8e09baed134" (UID: "98cbf3b2-6aa1-46ce-a73f-a8e09baed134"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.362507 3556 reconciler_common.go:300] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-config\") on node \"crc\" DevicePath \"\"" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.362648 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jzfbv\" (UniqueName: \"kubernetes.io/projected/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-kube-api-access-jzfbv\") on node \"crc\" DevicePath \"\"" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.362734 3556 reconciler_common.go:300] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-sensubility-config\") on node \"crc\" DevicePath \"\"" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.362826 3556 reconciler_common.go:300] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.362912 3556 reconciler_common.go:300] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.362995 3556 reconciler_common.go:300] 
"Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.363085 3556 reconciler_common.go:300] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98cbf3b2-6aa1-46ce-a73f-a8e09baed134-healthcheck-log\") on node \"crc\" DevicePath \"\"" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.781347 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" event={"ID":"98cbf3b2-6aa1-46ce-a73f-a8e09baed134","Type":"ContainerDied","Data":"43b5c975f6f01c1143820f1497e4fc66abd0c00e0886696a96fd654136450b13"} Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.781398 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b5c975f6f01c1143820f1497e4fc66abd0c00e0886696a96fd654136450b13" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.781469 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-pkxn5" Nov 28 00:40:23 crc kubenswrapper[3556]: I1128 00:40:23.914812 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:40:23 crc kubenswrapper[3556]: E1128 00:40:23.916104 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:40:25 crc kubenswrapper[3556]: I1128 00:40:25.299167 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-pkxn5_98cbf3b2-6aa1-46ce-a73f-a8e09baed134/smoketest-collectd/0.log" Nov 28 00:40:25 crc kubenswrapper[3556]: I1128 00:40:25.609238 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-pkxn5_98cbf3b2-6aa1-46ce-a73f-a8e09baed134/smoketest-ceilometer/0.log" Nov 28 00:40:25 crc kubenswrapper[3556]: I1128 00:40:25.895726 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-84dbc59cb8-5spw5_9677b702-b043-4d23-bb90-11259df9be04/default-interconnect/0.log" Nov 28 00:40:26 crc kubenswrapper[3556]: I1128 00:40:26.245459 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68_50a32d2a-8a43-446a-841a-2f4c8dc0932a/bridge/2.log" Nov 28 00:40:26 crc kubenswrapper[3556]: I1128 00:40:26.590558 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-68d7cdf9d4-76z68_50a32d2a-8a43-446a-841a-2f4c8dc0932a/sg-core/0.log" Nov 28 00:40:26 
crc kubenswrapper[3556]: I1128 00:40:26.866916 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w_e14c79b6-4eba-42d3-89a1-a72e8f3e58b9/bridge/2.log" Nov 28 00:40:27 crc kubenswrapper[3556]: I1128 00:40:27.166696 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5f8d7f4bcc-vfj9w_e14c79b6-4eba-42d3-89a1-a72e8f3e58b9/sg-core/0.log" Nov 28 00:40:27 crc kubenswrapper[3556]: I1128 00:40:27.508728 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-7866965967-qtslj_fc2ffb90-6204-4abc-90b8-1f67c8086a99/bridge/2.log" Nov 28 00:40:27 crc kubenswrapper[3556]: I1128 00:40:27.817913 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-7866965967-qtslj_fc2ffb90-6204-4abc-90b8-1f67c8086a99/sg-core/0.log" Nov 28 00:40:28 crc kubenswrapper[3556]: I1128 00:40:28.125536 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk_0598e713-9aa3-4365-9018-380ac3b9976d/bridge/2.log" Nov 28 00:40:28 crc kubenswrapper[3556]: I1128 00:40:28.395678 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-5cbcc95799-n4vfk_0598e713-9aa3-4365-9018-380ac3b9976d/sg-core/0.log" Nov 28 00:40:28 crc kubenswrapper[3556]: I1128 00:40:28.703978 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv_fdf35e1b-f7c9-41e2-8ae8-308f80623968/bridge/2.log" Nov 28 00:40:29 crc kubenswrapper[3556]: I1128 00:40:29.030514 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-c774c44f7-s8tsv_fdf35e1b-f7c9-41e2-8ae8-308f80623968/sg-core/0.log" Nov 28 00:40:32 crc kubenswrapper[3556]: I1128 00:40:32.035537 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-b4d9f888-97cvc_8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd/operator/0.log" Nov 28 00:40:32 crc kubenswrapper[3556]: I1128 00:40:32.368193 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_0c7c2afb-f325-4137-96a0-e217c2240fb1/prometheus/0.log" Nov 28 00:40:32 crc kubenswrapper[3556]: I1128 00:40:32.718839 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_df285d49-46a0-4b41-8d8b-7493edd5e268/elasticsearch/0.log" Nov 28 00:40:33 crc kubenswrapper[3556]: I1128 00:40:33.052996 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6755fc87b7-7vwps_a6b10489-636f-4218-9cac-8fc73e3d3e34/prometheus-webhook-snmp/0.log" Nov 28 00:40:33 crc kubenswrapper[3556]: I1128 00:40:33.419280 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_31086a87-8ffa-4122-9924-f46df3be87fd/alertmanager/0.log" Nov 28 00:40:36 crc kubenswrapper[3556]: I1128 00:40:36.913565 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:40:36 crc kubenswrapper[3556]: E1128 00:40:36.915734 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 
00:40:48 crc kubenswrapper[3556]: I1128 00:40:48.802844 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7f585466fb-ww6s8_c852c1b7-7cec-4ae1-a067-0b7bcda673ca/operator/0.log" Nov 28 00:40:48 crc kubenswrapper[3556]: I1128 00:40:48.919931 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:40:48 crc kubenswrapper[3556]: E1128 00:40:48.920829 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:40:52 crc kubenswrapper[3556]: I1128 00:40:52.126980 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-b4d9f888-97cvc_8eacead8-ae3d-4d50-b9b4-4f7c4261fbbd/operator/0.log" Nov 28 00:40:52 crc kubenswrapper[3556]: I1128 00:40:52.431643 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_a0504edd-5743-4726-bf65-8b0dfb5c29da/qdr/0.log" Nov 28 00:40:59 crc kubenswrapper[3556]: I1128 00:40:59.913639 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:40:59 crc kubenswrapper[3556]: E1128 00:40:59.915367 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:41:14 crc kubenswrapper[3556]: I1128 00:41:14.913740 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:41:14 crc kubenswrapper[3556]: E1128 00:41:14.915442 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.761655 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d9rrs/must-gather-5nz4g"] Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.762054 3556 topology_manager.go:215] "Topology Admit Handler" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" podNamespace="openshift-must-gather-d9rrs" podName="must-gather-5nz4g" Nov 28 00:41:17 crc kubenswrapper[3556]: E1128 00:41:17.762298 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerName="smoketest-ceilometer" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.762316 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerName="smoketest-ceilometer" Nov 28 00:41:17 crc kubenswrapper[3556]: E1128 00:41:17.762346 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerName="smoketest-collectd" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.762359 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerName="smoketest-collectd" Nov 28 00:41:17 crc kubenswrapper[3556]: E1128 00:41:17.762376 3556 
cpu_manager.go:396] "RemoveStaleState: removing container" podUID="d75caad6-f048-446d-bb68-fd2a8d5f98b7" containerName="curl" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.762386 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75caad6-f048-446d-bb68-fd2a8d5f98b7" containerName="curl" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.762590 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerName="smoketest-collectd" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.762609 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="98cbf3b2-6aa1-46ce-a73f-a8e09baed134" containerName="smoketest-ceilometer" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.762628 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75caad6-f048-446d-bb68-fd2a8d5f98b7" containerName="curl" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.763661 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.773832 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d9rrs/must-gather-5nz4g"] Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.775438 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d9rrs"/"kube-root-ca.crt" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.775988 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d9rrs"/"openshift-service-ca.crt" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.868768 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gdl2\" (UniqueName: \"kubernetes.io/projected/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-kube-api-access-4gdl2\") pod \"must-gather-5nz4g\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.868831 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-must-gather-output\") pod \"must-gather-5nz4g\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.970281 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-4gdl2\" (UniqueName: \"kubernetes.io/projected/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-kube-api-access-4gdl2\") pod \"must-gather-5nz4g\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.970353 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-must-gather-output\") pod \"must-gather-5nz4g\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.970773 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-must-gather-output\") pod \"must-gather-5nz4g\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:17 crc kubenswrapper[3556]: I1128 00:41:17.991960 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gdl2\" (UniqueName: \"kubernetes.io/projected/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-kube-api-access-4gdl2\") pod \"must-gather-5nz4g\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:18 crc kubenswrapper[3556]: I1128 00:41:18.079623 3556 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:41:18 crc kubenswrapper[3556]: I1128 00:41:18.489317 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d9rrs/must-gather-5nz4g"] Nov 28 00:41:18 crc kubenswrapper[3556]: I1128 00:41:18.725660 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:41:18 crc kubenswrapper[3556]: I1128 00:41:18.725741 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:41:18 crc kubenswrapper[3556]: I1128 00:41:18.725779 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:41:18 crc kubenswrapper[3556]: I1128 00:41:18.725807 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:41:18 crc kubenswrapper[3556]: I1128 00:41:18.725834 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:41:19 crc kubenswrapper[3556]: I1128 00:41:19.170403 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" event={"ID":"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5","Type":"ContainerStarted","Data":"9c5b15c4067e95c370f30d675d995f112c354deefb5a762a0638216044a0d672"} Nov 28 00:41:27 crc kubenswrapper[3556]: I1128 00:41:27.219812 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" event={"ID":"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5","Type":"ContainerStarted","Data":"7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8"} Nov 28 00:41:27 crc kubenswrapper[3556]: I1128 00:41:27.220178 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" 
event={"ID":"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5","Type":"ContainerStarted","Data":"67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331"} Nov 28 00:41:27 crc kubenswrapper[3556]: I1128 00:41:27.235938 3556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" podStartSLOduration=2.290234354 podStartE2EDuration="10.235888015s" podCreationTimestamp="2025-11-28 00:41:17 +0000 UTC" firstStartedPulling="2025-11-28 00:41:18.503709983 +0000 UTC m=+1740.095942003" lastFinishedPulling="2025-11-28 00:41:26.449363674 +0000 UTC m=+1748.041595664" observedRunningTime="2025-11-28 00:41:27.235189828 +0000 UTC m=+1748.827421818" watchObservedRunningTime="2025-11-28 00:41:27.235888015 +0000 UTC m=+1748.828120015" Nov 28 00:41:27 crc kubenswrapper[3556]: I1128 00:41:27.913089 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:41:27 crc kubenswrapper[3556]: E1128 00:41:27.913793 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:41:39 crc kubenswrapper[3556]: I1128 00:41:39.913337 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:41:39 crc kubenswrapper[3556]: E1128 00:41:39.914226 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:41:53 crc kubenswrapper[3556]: I1128 00:41:53.913376 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:41:53 crc kubenswrapper[3556]: E1128 00:41:53.914245 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:42:05 crc kubenswrapper[3556]: I1128 00:42:05.914342 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:42:05 crc kubenswrapper[3556]: E1128 00:42:05.915107 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:42:12 crc kubenswrapper[3556]: I1128 00:42:12.126743 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-649bd778b4-tt5tw_45a8038e-e7f2-4d93-a6f5-7753aa54e63f/control-plane-machine-set-operator/3.log" Nov 28 00:42:12 crc kubenswrapper[3556]: I1128 00:42:12.276604 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/kube-rbac-proxy/1.log" Nov 28 00:42:12 crc kubenswrapper[3556]: I1128 00:42:12.310929 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-788b7c6b6c-ctdmb_4f8aa612-9da0-4a2b-911e-6a1764a4e74e/machine-api-operator/2.log" Nov 28 00:42:18 crc kubenswrapper[3556]: I1128 00:42:18.726276 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:42:18 crc kubenswrapper[3556]: I1128 00:42:18.726770 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:42:18 crc kubenswrapper[3556]: I1128 00:42:18.726798 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:42:18 crc kubenswrapper[3556]: I1128 00:42:18.726825 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:42:18 crc kubenswrapper[3556]: I1128 00:42:18.726852 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:42:18 crc kubenswrapper[3556]: I1128 00:42:18.916141 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:42:18 crc kubenswrapper[3556]: E1128 00:42:18.916814 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 
00:42:25 crc kubenswrapper[3556]: I1128 00:42:25.364134 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-755d7666d5-kjtgj_c5833d33-da6c-4528-8318-b5778f1cc080/cert-manager-controller/0.log" Nov 28 00:42:25 crc kubenswrapper[3556]: I1128 00:42:25.451158 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-6dcc74f67d-2k68d_d7520d61-bf39-4dc2-a2a7-1d23584f20f7/cert-manager-cainjector/0.log" Nov 28 00:42:25 crc kubenswrapper[3556]: I1128 00:42:25.522129 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-58ffc98b58-4dq5m_eeef082f-da5f-460c-bd45-41d7602f97ef/cert-manager-webhook/0.log" Nov 28 00:42:33 crc kubenswrapper[3556]: I1128 00:42:33.913403 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:42:33 crc kubenswrapper[3556]: E1128 00:42:33.914640 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:42:41 crc kubenswrapper[3556]: I1128 00:42:41.971769 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j_c9c2afcd-78bb-4f35-b692-6bb9c4cca46e/util/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.178292 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j_c9c2afcd-78bb-4f35-b692-6bb9c4cca46e/util/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.318480 3556 
logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j_c9c2afcd-78bb-4f35-b692-6bb9c4cca46e/pull/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.324379 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j_c9c2afcd-78bb-4f35-b692-6bb9c4cca46e/pull/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.525278 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j_c9c2afcd-78bb-4f35-b692-6bb9c4cca46e/extract/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.536546 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j_c9c2afcd-78bb-4f35-b692-6bb9c4cca46e/util/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.556792 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69qlz9j_c9c2afcd-78bb-4f35-b692-6bb9c4cca46e/pull/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.739748 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd_3bc470cf-2bf2-4551-8f7b-85c8d6e3005c/util/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.905483 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd_3bc470cf-2bf2-4551-8f7b-85c8d6e3005c/util/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.916405 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd_3bc470cf-2bf2-4551-8f7b-85c8d6e3005c/pull/0.log" Nov 28 00:42:42 crc kubenswrapper[3556]: I1128 00:42:42.926273 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd_3bc470cf-2bf2-4551-8f7b-85c8d6e3005c/pull/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.070184 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd_3bc470cf-2bf2-4551-8f7b-85c8d6e3005c/extract/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.097049 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd_3bc470cf-2bf2-4551-8f7b-85c8d6e3005c/pull/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.118929 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qzlfd_3bc470cf-2bf2-4551-8f7b-85c8d6e3005c/util/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.275232 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444_f3824391-427a-4382-9971-0a119acc3392/util/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.442756 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444_f3824391-427a-4382-9971-0a119acc3392/pull/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.465437 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444_f3824391-427a-4382-9971-0a119acc3392/util/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.481579 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444_f3824391-427a-4382-9971-0a119acc3392/pull/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.626230 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444_f3824391-427a-4382-9971-0a119acc3392/util/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.627067 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444_f3824391-427a-4382-9971-0a119acc3392/pull/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.650033 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbh444_f3824391-427a-4382-9971-0a119acc3392/extract/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.833883 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5_996c7ba9-f850-43cf-8cc9-37ed57473f15/util/0.log" Nov 28 00:42:43 crc kubenswrapper[3556]: I1128 00:42:43.982167 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5_996c7ba9-f850-43cf-8cc9-37ed57473f15/pull/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.003356 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5_996c7ba9-f850-43cf-8cc9-37ed57473f15/util/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.057432 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5_996c7ba9-f850-43cf-8cc9-37ed57473f15/pull/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.200771 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5_996c7ba9-f850-43cf-8cc9-37ed57473f15/pull/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.208424 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5_996c7ba9-f850-43cf-8cc9-37ed57473f15/util/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.223178 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ej22q5_996c7ba9-f850-43cf-8cc9-37ed57473f15/extract/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.662196 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fqbbn_9ccee53e-7afd-4302-8b8e-5dfc9c4b5976/extract-content/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.704654 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fqbbn_9ccee53e-7afd-4302-8b8e-5dfc9c4b5976/extract-content/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.741805 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fqbbn_9ccee53e-7afd-4302-8b8e-5dfc9c4b5976/extract-utilities/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.846806 3556 
logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fqbbn_9ccee53e-7afd-4302-8b8e-5dfc9c4b5976/extract-utilities/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.903539 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fqbbn_9ccee53e-7afd-4302-8b8e-5dfc9c4b5976/extract-utilities/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.919231 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fqbbn_9ccee53e-7afd-4302-8b8e-5dfc9c4b5976/registry-server/0.log" Nov 28 00:42:44 crc kubenswrapper[3556]: I1128 00:42:44.928629 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fqbbn_9ccee53e-7afd-4302-8b8e-5dfc9c4b5976/extract-content/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.053421 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbw72_23e1da55-6d41-441d-9587-9b9c74e80d23/extract-utilities/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.255840 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbw72_23e1da55-6d41-441d-9587-9b9c74e80d23/extract-content/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.255990 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbw72_23e1da55-6d41-441d-9587-9b9c74e80d23/extract-content/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.262475 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbw72_23e1da55-6d41-441d-9587-9b9c74e80d23/extract-utilities/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.414992 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-kbw72_23e1da55-6d41-441d-9587-9b9c74e80d23/extract-content/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.415608 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbw72_23e1da55-6d41-441d-9587-9b9c74e80d23/extract-utilities/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.419051 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kbw72_23e1da55-6d41-441d-9587-9b9c74e80d23/registry-server/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.460828 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-8b455464d-xd2kb_57b23f79-74b4-4ba9-bf50-aeaa322b31df/marketplace-operator/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.589885 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fbgdp_3fe93442-3fb2-4ae8-ade9-110f5702aa99/extract-utilities/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.791704 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fbgdp_3fe93442-3fb2-4ae8-ade9-110f5702aa99/extract-content/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.798332 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fbgdp_3fe93442-3fb2-4ae8-ade9-110f5702aa99/extract-content/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.800850 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fbgdp_3fe93442-3fb2-4ae8-ade9-110f5702aa99/extract-utilities/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.947534 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fbgdp_3fe93442-3fb2-4ae8-ade9-110f5702aa99/extract-utilities/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.947543 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fbgdp_3fe93442-3fb2-4ae8-ade9-110f5702aa99/extract-content/0.log" Nov 28 00:42:45 crc kubenswrapper[3556]: I1128 00:42:45.955678 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fbgdp_3fe93442-3fb2-4ae8-ade9-110f5702aa99/registry-server/0.log" Nov 28 00:42:46 crc kubenswrapper[3556]: I1128 00:42:46.913866 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:42:46 crc kubenswrapper[3556]: E1128 00:42:46.914426 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:42:58 crc kubenswrapper[3556]: I1128 00:42:58.402442 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-864b67f9b9-8jfq9_bc793216-a760-4653-9d22-4744eb2ac5b3/prometheus-operator/0.log" Nov 28 00:42:58 crc kubenswrapper[3556]: I1128 00:42:58.533785 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84dd4b856b-78lc2_e3267f68-5450-454b-8ce9-39e0039c4f6f/prometheus-operator-admission-webhook/0.log" Nov 28 00:42:58 crc kubenswrapper[3556]: I1128 00:42:58.586797 3556 logs.go:325] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-84dd4b856b-zns8k_3ff69e08-3c02-49ce-92a7-6a30d3c6191e/prometheus-operator-admission-webhook/0.log" Nov 28 00:42:58 crc kubenswrapper[3556]: I1128 00:42:58.708784 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-65df589ff7-dmlxl_02a59992-a6d8-4bb1-b714-9c47f7af71f8/operator/0.log" Nov 28 00:42:58 crc kubenswrapper[3556]: I1128 00:42:58.780397 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-574fd8d65d-gdfw7_9ce9f8fc-09c2-48c2-8304-f0b1a010b9e4/perses-operator/0.log" Nov 28 00:42:58 crc kubenswrapper[3556]: I1128 00:42:58.918031 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:42:58 crc kubenswrapper[3556]: E1128 00:42:58.919184 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:43:10 crc kubenswrapper[3556]: I1128 00:43:10.914271 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:43:10 crc kubenswrapper[3556]: E1128 00:43:10.918954 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" 
podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:43:18 crc kubenswrapper[3556]: I1128 00:43:18.727129 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running" Nov 28 00:43:18 crc kubenswrapper[3556]: I1128 00:43:18.727763 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running" Nov 28 00:43:18 crc kubenswrapper[3556]: I1128 00:43:18.727849 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running" Nov 28 00:43:18 crc kubenswrapper[3556]: I1128 00:43:18.727904 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running" Nov 28 00:43:18 crc kubenswrapper[3556]: I1128 00:43:18.727947 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running" Nov 28 00:43:22 crc kubenswrapper[3556]: I1128 00:43:22.916921 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:43:22 crc kubenswrapper[3556]: E1128 00:43:22.918477 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:43:35 crc kubenswrapper[3556]: I1128 00:43:35.914294 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:43:35 crc kubenswrapper[3556]: E1128 00:43:35.915087 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:43:44 crc kubenswrapper[3556]: I1128 00:43:44.169878 3556 generic.go:334] "Generic (PLEG): container finished" podID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerID="67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331" exitCode=0 Nov 28 00:43:44 crc kubenswrapper[3556]: I1128 00:43:44.169989 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" event={"ID":"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5","Type":"ContainerDied","Data":"67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331"} Nov 28 00:43:44 crc kubenswrapper[3556]: I1128 00:43:44.171340 3556 scope.go:117] "RemoveContainer" containerID="67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331" Nov 28 00:43:44 crc kubenswrapper[3556]: I1128 00:43:44.499111 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d9rrs_must-gather-5nz4g_ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5/gather/0.log" Nov 28 00:43:48 crc kubenswrapper[3556]: I1128 00:43:48.918070 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:43:48 crc kubenswrapper[3556]: E1128 00:43:48.919410 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 
00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.101502 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d9rrs/must-gather-5nz4g"] Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.102508 3556 kuberuntime_container.go:770] "Killing container with a grace period" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerName="copy" containerID="cri-o://7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8" gracePeriod=2 Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.115643 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d9rrs/must-gather-5nz4g"] Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.484563 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d9rrs_must-gather-5nz4g_ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5/copy/0.log" Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.485177 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.600590 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gdl2\" (UniqueName: \"kubernetes.io/projected/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-kube-api-access-4gdl2\") pod \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.600858 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-must-gather-output\") pod \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\" (UID: \"ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5\") " Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.610035 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-kube-api-access-4gdl2" (OuterVolumeSpecName: "kube-api-access-4gdl2") pod "ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" (UID: "ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5"). InnerVolumeSpecName "kube-api-access-4gdl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.677186 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" (UID: "ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.702785 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4gdl2\" (UniqueName: \"kubernetes.io/projected/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-kube-api-access-4gdl2\") on node \"crc\" DevicePath \"\"" Nov 28 00:43:51 crc kubenswrapper[3556]: I1128 00:43:51.702827 3556 reconciler_common.go:300] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.225943 3556 logs.go:325] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d9rrs_must-gather-5nz4g_ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5/copy/0.log" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.226608 3556 generic.go:334] "Generic (PLEG): container finished" podID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerID="7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8" exitCode=143 Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.226629 3556 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d9rrs/must-gather-5nz4g" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.226648 3556 scope.go:117] "RemoveContainer" containerID="7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.251109 3556 scope.go:117] "RemoveContainer" containerID="67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.295271 3556 scope.go:117] "RemoveContainer" containerID="7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8" Nov 28 00:43:52 crc kubenswrapper[3556]: E1128 00:43:52.295824 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8\": container with ID starting with 7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8 not found: ID does not exist" containerID="7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.295922 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8"} err="failed to get container status \"7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8\": rpc error: code = NotFound desc = could not find container \"7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8\": container with ID starting with 7c78f7fa2fc474b29cf69dce2cac66f16d735efc4de7ba556809154bb3ba35a8 not found: ID does not exist" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.295945 3556 scope.go:117] "RemoveContainer" containerID="67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331" Nov 28 00:43:52 crc kubenswrapper[3556]: E1128 00:43:52.296298 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331\": container with ID starting with 67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331 not found: ID does not exist" containerID="67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.296349 3556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331"} err="failed to get container status \"67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331\": rpc error: code = NotFound desc = could not find container \"67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331\": container with ID starting with 67264a4491ca59c5dd61c69aa16e3933d9d677136f32ad1d45dff00ffe11b331 not found: ID does not exist" Nov 28 00:43:52 crc kubenswrapper[3556]: I1128 00:43:52.930130 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" path="/var/lib/kubelet/pods/ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5/volumes" Nov 28 00:44:00 crc kubenswrapper[3556]: I1128 00:44:00.913694 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2" Nov 28 00:44:00 crc kubenswrapper[3556]: E1128 00:44:00.915410 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589" Nov 28 00:44:14 crc kubenswrapper[3556]: I1128 00:44:14.913867 3556 scope.go:117] "RemoveContainer" 
containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2"
Nov 28 00:44:14 crc kubenswrapper[3556]: E1128 00:44:14.915509 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Nov 28 00:44:18 crc kubenswrapper[3556]: I1128 00:44:18.728323 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:44:18 crc kubenswrapper[3556]: I1128 00:44:18.728749 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:44:18 crc kubenswrapper[3556]: I1128 00:44:18.728808 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:44:18 crc kubenswrapper[3556]: I1128 00:44:18.728875 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:44:18 crc kubenswrapper[3556]: I1128 00:44:18.728932 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:44:28 crc kubenswrapper[3556]: I1128 00:44:28.916031 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2"
Nov 28 00:44:28 crc kubenswrapper[3556]: E1128 00:44:28.919093 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Nov 28 00:44:42 crc kubenswrapper[3556]: I1128 00:44:42.913754 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2"
Nov 28 00:44:42 crc kubenswrapper[3556]: E1128 00:44:42.915537 3556 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zpnhg_openshift-machine-config-operator(9d0dcce3-d96e-48cb-9b9f-362105911589)\"" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" podUID="9d0dcce3-d96e-48cb-9b9f-362105911589"
Nov 28 00:44:53 crc kubenswrapper[3556]: I1128 00:44:53.913770 3556 scope.go:117] "RemoveContainer" containerID="d40cd6b45a9406c567a72a40b9f37be5483a078bec159f9c4b474eddc52bbed2"
Nov 28 00:44:54 crc kubenswrapper[3556]: I1128 00:44:54.764575 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zpnhg" event={"ID":"9d0dcce3-d96e-48cb-9b9f-362105911589","Type":"ContainerStarted","Data":"0f4a7d1802bfa131dcb1f2223158b5d4648015f272990fa68763fb8e18f0ba94"}
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.179326 3556 kubelet.go:2429] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"]
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.180261 3556 topology_manager.go:215] "Topology Admit Handler" podUID="96202c03-cfb8-4ae3-89c3-99b7ad93d260" podNamespace="openshift-operator-lifecycle-manager" podName="collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: E1128 00:45:00.180560 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerName="gather"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.180580 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerName="gather"
Nov 28 00:45:00 crc kubenswrapper[3556]: E1128 00:45:00.180603 3556 cpu_manager.go:396] "RemoveStaleState: removing container" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerName="copy"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.180618 3556 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerName="copy"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.180874 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerName="gather"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.180923 3556 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec530b07-d9b0-47ff-9a8d-9b24e50c6dc5" containerName="copy"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.182044 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.224869 3556 reflector.go:351] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-45g9d"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.225086 3556 reflector.go:351] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.228875 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96202c03-cfb8-4ae3-89c3-99b7ad93d260-config-volume\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.229152 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96202c03-cfb8-4ae3-89c3-99b7ad93d260-secret-volume\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.229339 3556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdwh4\" (UniqueName: \"kubernetes.io/projected/96202c03-cfb8-4ae3-89c3-99b7ad93d260-kube-api-access-xdwh4\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.231896 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"]
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.330234 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"kube-api-access-xdwh4\" (UniqueName: \"kubernetes.io/projected/96202c03-cfb8-4ae3-89c3-99b7ad93d260-kube-api-access-xdwh4\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.330549 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96202c03-cfb8-4ae3-89c3-99b7ad93d260-config-volume\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.330672 3556 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96202c03-cfb8-4ae3-89c3-99b7ad93d260-secret-volume\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.332427 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96202c03-cfb8-4ae3-89c3-99b7ad93d260-config-volume\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.343824 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96202c03-cfb8-4ae3-89c3-99b7ad93d260-secret-volume\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.346880 3556 operation_generator.go:721] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdwh4\" (UniqueName: \"kubernetes.io/projected/96202c03-cfb8-4ae3-89c3-99b7ad93d260-kube-api-access-xdwh4\") pod \"collect-profiles-29404845-9d94c\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.546209 3556 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:00 crc kubenswrapper[3556]: I1128 00:45:00.842821 3556 kubelet.go:2436] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"]
Nov 28 00:45:01 crc kubenswrapper[3556]: I1128 00:45:01.841797 3556 generic.go:334] "Generic (PLEG): container finished" podID="96202c03-cfb8-4ae3-89c3-99b7ad93d260" containerID="a9e0e9a33fdbc9fe0c71ec7bb83095e1e49bf5dba915eed59a3e7e4c331896c6" exitCode=0
Nov 28 00:45:01 crc kubenswrapper[3556]: I1128 00:45:01.841880 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c" event={"ID":"96202c03-cfb8-4ae3-89c3-99b7ad93d260","Type":"ContainerDied","Data":"a9e0e9a33fdbc9fe0c71ec7bb83095e1e49bf5dba915eed59a3e7e4c331896c6"}
Nov 28 00:45:01 crc kubenswrapper[3556]: I1128 00:45:01.842187 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c" event={"ID":"96202c03-cfb8-4ae3-89c3-99b7ad93d260","Type":"ContainerStarted","Data":"865d73a5edc58910c208f4f9b1058cc1e7a81b4d5b5c94effeb235ef753af4c8"}
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.129452 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.176715 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdwh4\" (UniqueName: \"kubernetes.io/projected/96202c03-cfb8-4ae3-89c3-99b7ad93d260-kube-api-access-xdwh4\") pod \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") "
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.176803 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96202c03-cfb8-4ae3-89c3-99b7ad93d260-config-volume\") pod \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") "
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.176888 3556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96202c03-cfb8-4ae3-89c3-99b7ad93d260-secret-volume\") pod \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\" (UID: \"96202c03-cfb8-4ae3-89c3-99b7ad93d260\") "
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.178054 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96202c03-cfb8-4ae3-89c3-99b7ad93d260-config-volume" (OuterVolumeSpecName: "config-volume") pod "96202c03-cfb8-4ae3-89c3-99b7ad93d260" (UID: "96202c03-cfb8-4ae3-89c3-99b7ad93d260"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.184201 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96202c03-cfb8-4ae3-89c3-99b7ad93d260-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "96202c03-cfb8-4ae3-89c3-99b7ad93d260" (UID: "96202c03-cfb8-4ae3-89c3-99b7ad93d260"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.184479 3556 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96202c03-cfb8-4ae3-89c3-99b7ad93d260-kube-api-access-xdwh4" (OuterVolumeSpecName: "kube-api-access-xdwh4") pod "96202c03-cfb8-4ae3-89c3-99b7ad93d260" (UID: "96202c03-cfb8-4ae3-89c3-99b7ad93d260"). InnerVolumeSpecName "kube-api-access-xdwh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.278175 3556 reconciler_common.go:300] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96202c03-cfb8-4ae3-89c3-99b7ad93d260-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.278207 3556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xdwh4\" (UniqueName: \"kubernetes.io/projected/96202c03-cfb8-4ae3-89c3-99b7ad93d260-kube-api-access-xdwh4\") on node \"crc\" DevicePath \"\""
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.278219 3556 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96202c03-cfb8-4ae3-89c3-99b7ad93d260-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.859414 3556 kubelet.go:2461] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c" event={"ID":"96202c03-cfb8-4ae3-89c3-99b7ad93d260","Type":"ContainerDied","Data":"865d73a5edc58910c208f4f9b1058cc1e7a81b4d5b5c94effeb235ef753af4c8"}
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.859729 3556 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="865d73a5edc58910c208f4f9b1058cc1e7a81b4d5b5c94effeb235ef753af4c8"
Nov 28 00:45:03 crc kubenswrapper[3556]: I1128 00:45:03.859495 3556 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404845-9d94c"
Nov 28 00:45:04 crc kubenswrapper[3556]: I1128 00:45:04.226853 3556 kubelet.go:2445] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"]
Nov 28 00:45:04 crc kubenswrapper[3556]: I1128 00:45:04.233909 3556 kubelet.go:2439] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29251950-x8jjd"]
Nov 28 00:45:04 crc kubenswrapper[3556]: I1128 00:45:04.927771 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad171c4b-8408-4370-8e86-502999788ddb" path="/var/lib/kubelet/pods/ad171c4b-8408-4370-8e86-502999788ddb/volumes"
Nov 28 00:45:18 crc kubenswrapper[3556]: I1128 00:45:18.730111 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" status="Running"
Nov 28 00:45:18 crc kubenswrapper[3556]: I1128 00:45:18.730790 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" status="Running"
Nov 28 00:45:18 crc kubenswrapper[3556]: I1128 00:45:18.730840 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-etcd/etcd-crc" status="Running"
Nov 28 00:45:18 crc kubenswrapper[3556]: I1128 00:45:18.730896 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-kube-apiserver/kube-apiserver-crc" status="Running"
Nov 28 00:45:18 crc kubenswrapper[3556]: I1128 00:45:18.730945 3556 kubelet_getters.go:187] "Pod status updated" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" status="Running"
Nov 28 00:45:20 crc kubenswrapper[3556]: E1128 00:45:20.305190 3556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89\": container with ID starting with 67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89 not found: ID does not exist" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89"
Nov 28 00:45:20 crc kubenswrapper[3556]: I1128 00:45:20.305606 3556 kuberuntime_gc.go:360] "Error getting ContainerStatus for containerID" containerID="67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89" err="rpc error: code = NotFound desc = could not find container \"67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89\": container with ID starting with 67968268b9681a78ea8ff7d1d622336aeef2dd395719c809f7d90fd4229e2e89 not found: ID does not exist"